SNMP dilemma, or the problem with the index

The SNMP tree contains entries that define the port status, in this form:

root@zabbix:~ # snmpwalk -v2c -cpublic 192.168.0.1 1.3.6.1.2.1.2.2.1.7

IF-MIB::ifAdminStatus.805306369 = INTEGER: up(1)

IF-MIB::ifAdminStatus.805306370 = INTEGER: up(1)

IF-MIB::ifAdminStatus.805306371 = INTEGER: down(2)

IF-MIB::ifAdminStatus.805306372 = INTEGER: down(2)

IF-MIB::ifAdminStatus.805306373 = INTEGER: down(2)

IF-MIB::ifAdminStatus.805306374 = INTEGER: down(2)

IF-MIB::ifAdminStatus.1073741824 = INTEGER: up(1)

IF-MIB::ifAdminStatus.1073741825 = INTEGER: up(1)

IF-MIB::ifAdminStatus.1073741826 = INTEGER: up(1)

IF-MIB::ifAdminStatus.1073741827 = INTEGER: down(2)

IF-MIB::ifAdminStatus.1073741828 = INTEGER: down(2)

So the index here is 805306369.

Unfortunately, some information in the SNMP tree is not stored in a way that lets it be addressed via this index, e.g. the RX/TX values of the interfaces.
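To at least see which port is hiding behind which index, ifDescr can be walked with the same community string; a minimal sketch against the example device from above:

root@zabbix:~ # snmpwalk -v2c -cpublic 192.168.0.1 1.3.6.1.2.1.2.2.1.2

Every returned line pairs one of the numeric indices (e.g. 805306369) with the port description, which helps when correlating values from tables that use a different index.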

HP know-how

Snapshot vs. HP Business Copy

A snapshot of a data set is available immediately. All changes made after the snapshot was created end up in the snapshot and are added to it. Read and write operations therefore put an additional load on performance.

With a Business Copy (BC), two copies that start out in sync split apart and then drift further and further away from each other. When a new Business Copy is created, the secondary partner converges back towards the primary partner until both are identical again.

HP Continuous Access is HP's replication solution: data is copied between SAN arrays in real time. Each SAN array uses a local history log to buffer asynchronous write operations temporarily. For synchronous writes, the history log is only used if the sync link goes down.

Thanks to Henning for the information.

Source: http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c02838340-4.pdf

zbxsmokeping patch – invalid parameter -v

38,41c38,41

< $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLoos -o ${tab[0]} -v | grep "Failed 1"

< $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyMin -o ${tab[1]} -v | grep "Failed 1"

< $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyMax -o ${tab[3]} -v | grep "Failed 1"

< $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyAvg -o ${tab[2]} -v | grep "Failed 1"

---

> if [ ! -z "${tab[0]}" ]; then $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLoos -o ${tab[0]} -v | grep "Failed 1"; fi

> if [ ! -z "${tab[1]}" ]; then $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyMin -o ${tab[1]} -v | grep "Failed 1"; fi

> if [ ! -z "${tab[3]}" ]; then $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyMax -o ${tab[3]} -v | grep "Failed 1"; fi

> if [ ! -z "${tab[2]}" ]; then $ZBXSENDER -z $ZBXSERVER -p 10051 -s $HOSTNAME -k SmokLatencyAvg -o ${tab[2]} -v | grep "Failed 1"; fi
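Saved to a file, the normal-format diff above can be applied with patch; the file names here are assumptions and have to be adjusted to the actual installation:

patch /usr/local/bin/zbxsmokeping.sh < zbxsmokeping.patch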

ESzabbix.py – patch for Elasticsearch 1.4

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster.html

https://github.com/serialsito/Elasticsearch-zabbix/issues/4

The patch:

--- managers.py.orig 2015-02-11 14:20:49.428124337 +0100
+++ managers.py 2015-02-11 14:53:30.283293505 +0100
@@ -495,11 +495,11 @@
if all_nodes:
path = make_path('_shutdown')
elif master:
- path = make_path("_cluster", "nodes", "_master", "_shutdown")
+ path = make_path("_nodes", "_master", "_shutdown")
elif nodes:
- path = make_path("_cluster", "nodes", ",".join(nodes), "_shutdown")
+ path = make_path("_nodes", ",".join(nodes), "_shutdown")
elif local:
- path = make_path("_cluster", "nodes", "_local", "_shutdown")
+ path = make_path("_nodes", "_local", "_shutdown")
if delay:
try:
int(delay)
@@ -574,7 +574,7 @@
list of indices to include in the response.

"""
- path = make_path("_cluster", "state")
+ path = make_path("_", "state")
parameters = {}

if filter_nodes is not None:
@@ -602,7 +602,7 @@
The cluster :ref:`nodes info ` API allows to retrieve one or more (or all) of
the cluster nodes information.
"""
- parts = ["_cluster", "nodes"]
+ parts = ["_nodes"]
if nodes:
parts.append(",".join(nodes))
path = make_path(*parts)
@@ -622,9 +622,9 @@
The cluster :ref:`nodes info ` API allows to retrieve one or more (or all) of
the cluster nodes information.
"""
- parts = ["_cluster", "nodes", "stats"]
+ parts = ["_nodes", "stats"]
if nodes:
- parts = ["_cluster", "nodes", ",".join(nodes), "stats"]
+ parts = ["_nodes", ",".join(nodes), "stats"]

path = make_path(*parts)
return self.conn._send_request('GET', path)
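The patch follows the API change in Elasticsearch 1.x, where node info and node stats moved from /_cluster/nodes to /_nodes. Whether the renamed endpoints answer can be checked quickly with curl; host and port are assumptions:

curl -s http://localhost:9200/_nodes | head
curl -s http://localhost:9200/_nodes/stats | head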

Zabbix PostgreSQL backup without historical data
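The snippet below assumes that the variables $datum and $logfile are set earlier in the backup script, for example:

datum=$(date +%Y-%m-%d)
logfile=/var/log/zabbix_db_backup.log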

# Back up the database, excluding the large tables (-T ...)

echo "`date` : starting DB backup without historical data" >> $logfile

/usr/pgsql-9.2/bin/pg_dump --host localhost --port 5432 --username "postgres" --no-password -T acknowledges* -T alerts* -T audit* -T events* -T history* -T trends* --verbose --file "/media/nfs_backup/$datum-zabbix_datensicherung_db_ohne_hist_daten.sql" "zabbix"

# Back up only the large tables, schema only, without data (-t ... --schema-only)

/usr/pgsql-9.2/bin/pg_dump --host localhost --port 5432 --username "postgres" --no-password --schema-only -t acknowledges* -t alerts* -t audit* -t events* -t history* -t trends* --verbose 'zabbix' >> "/media/nfs_backup/$datum-zabbix_datensicherung_db_ohne_hist_daten.sql"
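A matching restore would replay the dump with psql into an (ideally empty) zabbix database; a minimal sketch using the same paths as above:

/usr/pgsql-9.2/bin/psql --host localhost --port 5432 --username "postgres" --no-password --dbname "zabbix" --file "/media/nfs_backup/$datum-zabbix_datensicherung_db_ohne_hist_daten.sql"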

SELinux know-how

Debugging SELinux policy problems

If an application causes problems with SELinux in enforcing mode, the violations can be observed in /var/log/audit/audit.log.

An optimal test procedure is:

Set the SELinux policy to permissive – all blocked actions are now logged, but still allowed.

Now the developer/user can test the application thoroughly.

Finally, all violations recorded in /var/log/audit/audit.log can be fed to audit2allow (see the sketch below).
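A minimal sketch of this workflow, assuming the application's audit entries can be filtered by the placeholder name myapp and that the local module may be called myapp_local:

# log violations instead of blocking them (permissive mode)
setenforce 0
# ... now exercise the application thoroughly ...
# build a local policy module from the recorded violations and load it
grep myapp /var/log/audit/audit.log | audit2allow -M myapp_local
semodule -i myapp_local.pp
# switch back to enforcing mode
setenforce 1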

Thanks to Uwe for the explanation!

Elasticsearch ESzabbix – Zabbix patch

Patched ESzabbix.py version – the developer was informed, but there has been no response – here is the code:

#!/usr/bin/env python

# Created by Aaron Mildenstein on 19 SEP 2012

from pyes import *
import sys

# Define the fail message
def zbx_fail():
    print "ZBX_NOTSUPPORTED"
    sys.exit(2)

searchkeys = ['query_total', 'fetch_time_in_millis', 'fetch_total', 'fetch_time', 'query_current', 'fetch_current', 'query_time_in_millis']
getkeys = ['missing_total', 'exists_total', 'current', 'time_in_millis', 'missing_time_in_millis', 'exists_time_in_millis', 'total']
docskeys = ['count', 'deleted']
indexingkeys = ['delete_time_in_millis', 'index_total', 'index_current', 'delete_total', 'index_time_in_millis', 'delete_current']
storekeys = ['size_in_bytes', 'throttle_time_in_millis']
cachekeys = ['filter_size_in_bytes', 'field_size_in_bytes', 'field_evictions']
clusterkeys = searchkeys + getkeys + docskeys + indexingkeys + storekeys
returnval = None
# build hostname
hostname = sys.argv[1] + ':9200'

# __main__

# We need to have two command-line args:
# sys.argv[1]: The node name or "cluster"
# sys.argv[2]: The "key" (status, filter_size_in_bytes, etc)

if len(sys.argv) < 3:
    zbx_fail()

# Try to establish a connection to elasticsearch
try:
    # connect to user supplied hostname
    conn = ES(hostname, timeout=25, default_indices=[''])
except Exception, e:
    zbx_fail()

# changed the logic from == to !=
if sys.argv[1] != 'cluster':
    if sys.argv[2] in clusterkeys:
        nodestats = conn.cluster.node_stats()
        subtotal = 0
        passcount = 0
        for nodename in nodestats['nodes']:
            if sys.argv[2] in indexingkeys:
                indexstats = nodestats['nodes'][nodename]['indices']['indexing']
            elif sys.argv[2] in storekeys:
                indexstats = nodestats['nodes'][nodename]['indices']['store']
            elif sys.argv[2] in getkeys:
                indexstats = nodestats['nodes'][nodename]['indices']['get']
            elif sys.argv[2] in docskeys:
                indexstats = nodestats['nodes'][nodename]['indices']['docs']
                # Docs are cluster-wide, despite the sub-index being "by node". Until this is changed, we have to do this by passcount
                passcount += 1
            elif sys.argv[2] in searchkeys:
                indexstats = nodestats['nodes'][nodename]['indices']['search']
            try:
                if passcount < 2:
                    subtotal += indexstats[sys.argv[2]]
            except Exception, e:
                pass
        returnval = subtotal

    else:
        # Try to pull the managers object data
        try:
            escluster = managers.Cluster(conn)
        except Exception, e:
            zbx_fail()
        # Try to get a value to match the key provided
        try:
            returnval = escluster.health()[sys.argv[2]]
        except Exception, e:
            zbx_fail()
        # If the key is "status" then we need to map that to an integer
        if sys.argv[2] == 'status':
            if returnval == 'green':
                returnval = 0
            elif returnval == 'yellow':
                returnval = 1
            elif returnval == 'red':
                returnval = 2
            else:
                zbx_fail()

else: # Not clusterwide, check the next arg

    nodestats = conn.cluster.node_stats()
    for nodename in nodestats['nodes']:
        if sys.argv[1] in nodestats['nodes'][nodename]['name']:
            if sys.argv[2] in indexingkeys:
                stats = nodestats['nodes'][nodename]['indices']['indexing']
            elif sys.argv[2] in storekeys:
                stats = nodestats['nodes'][nodename]['indices']['store']
            elif sys.argv[2] in getkeys:
                stats = nodestats['nodes'][nodename]['indices']['get']
            elif sys.argv[2] in docskeys:
                stats = nodestats['nodes'][nodename]['indices']['docs']
            elif sys.argv[2] in searchkeys:
                stats = nodestats['nodes'][nodename]['indices']['search']
            elif sys.argv[2] in cachekeys:
                stats = nodestats['nodes'][nodename]['indices']['cache']
            try:
                returnval = stats[sys.argv[2]]
            except Exception, e:
                pass

# If we somehow did not get a value here, that's a problem. Send back the standard
# ZBX_NOTSUPPORTED
if returnval is None:
    zbx_fail()
else:
    print returnval

# End
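For completeness, a sketch of how the script can be called and wired into the Zabbix agent; the Elasticsearch host name es01, the install path and the item key are assumptions, not part of the original post:

# manual test: query the node es01 (the script appends :9200 itself) and
# return the cluster status as an integer (0 = green, 1 = yellow, 2 = red)
./ESzabbix.py es01 status

# possible zabbix_agentd.conf entry
UserParameter=ESzabbix[*],/etc/zabbix/externalscripts/ESzabbix.py $1 $2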