Interrupt the boot process to gain access to a system and change the root password – RHEL/Centos 7

  1. When the GRUB menu appears, press E.
  2. At the end of the line that starts with linux16, append rd.break.
  3. Press Ctrl-X to boot with the modified entry.
  4. You will be dropped into the initramfs emergency shell (the switch_root prompt).
  5. Remount the sysroot directory read-write: mount -o remount,rw /sysroot
  6. Change root into sysroot: chroot /sysroot
  7. Change the root password: passwd
  8. If SELinux is enabled you will need to relabel all files; create a file called .autorelabel in the / directory of sysroot (touch /.autorelabel).
  9. Exit the chroot, then exit again to resume booting. The full sequence is sketched below.
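
For reference, the whole exchange at the switch_root prompt looks roughly like this (a sketch assuming the default Centos 7 layout; the relabel line is only needed when SELinux is enforcing):

switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot
sh-4.2# passwd
sh-4.2# touch /.autorelabel
sh-4.2# exit
switch_root:/# exit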

virsh – Manage VMs

List all VMs

virsh list --all

[root@desktop ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     centos7.0                      shut off
 -     centos7.0-2                    shut off

Create a snapshot

virsh snapshot-create-as --domain centos7.0-2 \
> --name "Testing" \
> --description "Testing stuff" \
> --live

List any snapshots of a VM

virsh snapshot-list --domain centos7.0-2
 Name                 Creation Time             State
------------------------------------------------------------
 Testing              2017-10-22 15:35:40 -0400 shutoff

Revert to a snapshot

virsh snapshot-revert centos7.0-2 Testing

Power up/off a VM

virsh start centos7.0-2
virsh shutdown centos7.0-2

Find the IP of a VM by using its MAC address

[root@desktop ~]# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.1              ether   28:56:5a:e9:3a:0b   C                     enp5s0
192.168.0.8              ether   74:2f:68:f7:32:0e   C                     enp5s0
192.168.124.148          ether   52:54:00:27:86:b6   C                     virbr0
192.168.0.5              ether   70:85:c2:29:cf:a3   C                     enp5s0
192.168.0.4              ether   d0:50:99:09:38:63   C                     enp5s0

[root@desktop ~]# virsh domiflist centos7.0-2
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet0      network    default    virtio      52:54:00:27:86:b6
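
The two outputs can be combined into a quick lookup. A minimal sketch, assuming the VM has generated some traffic recently so that its MAC address is present in the ARP table:

MAC=$(virsh domiflist centos7.0-2 | awk 'NR>2 && $5 {print $5}')
arp -n | grep -i "$MAC"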

Enable/Disable Auto Start of guest upon boot

[root@desktop ~]# virsh autostart centos7.0-2 
Domain centos7.0-2 marked as autostarted

[root@desktop ~]# virsh autostart centos7.0-2  --disable
Domain centos7.0-2 unmarked as autostarted



Centos 7 – Part 5 – HAproxy systemctl script

vi /etc/systemd/system/haproxy.service
[Unit]
Description=HAProxy load balancer
After=network.target

[Service]
Type=forking
PIDFile=/run/haproxy.pid
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=haproxy
User=root
Group=root
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable haproxy
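
After the unit is in place, a quick sanity check might look like this (unit name as above):

systemctl start haproxy
systemctl status haproxy
journalctl -u haproxy -n 20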

Centos 7 – Part 4 – HAProxy Standby with SSL support combined with NGINX Load Balancing

These instructions expand on the previous post, which showed how to implement HAProxy with SSL in front of two NGINX load balancers that already have failover between them. This post shows how to add a second HAProxy server so that the proxy layer has failover as well.

As explained in the previous post, HAProxy and keepalived need to be installed on the new server as well.

HAProxy

Configure HAProxy.

The configuration file for server HAPROXY2 is the same as the configuration file on server HAPROXY1, apart from the IP address that we bind to. Important: you must copy the SSL certificate files from the HAPROXY1 server to the HAPROXY2 server, under the directory specified in the config file.
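
Copying the certificate over might look roughly like this (the hostname haproxy2 is an assumption; the path matches the bind line below):

ssh root@haproxy2 'mkdir -p /etc/ssl/haproxy1.sfentona.lol'
scp /etc/ssl/haproxy1.sfentona.lol/haproxy1.pem root@haproxy2:/etc/ssl/haproxy1.sfentona.lol/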

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 debug
        maxconn   45000 # Total Max Connections.
        daemon
        nbproc      1 # Number of processing cores.
defaults
        timeout server 86400000
        timeout connect 86400000
        timeout client 86400000
        timeout queue   1000s

frontend https_frontend
  bind 10.0.0.53:443 ssl crt /etc/ssl/haproxy1.sfentona.lol/haproxy1.pem
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  default_backend web_server

backend web_server
  mode http
  balance roundrobin
  cookie SERVERID insert indirect nocache
  server wordpressvirtip 10.0.0.44:80

Configure Keepalived

Keep in mind that we already have keepalived running for the two NGINX load balancers, which are designated in keepalived.conf as virtual_router_id 51. The HAProxy servers are assigned a different id: HAPROXY1 and HAPROXY2 will be designated as virtual_router_id 52.

Keepalived config file for HAPROXY1

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
script "killall -0 haproxy"     # cheaper than pidof
interval 2                      # check every 2 seconds
weight 2                        # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface ens192
state MASTER
virtual_router_id 52
priority 101                    # 101 on master, 100 on backup
virtual_ipaddress {
10.0.0.54
}
track_script {
chk_haproxy
}
}

Keepalived config file for HAPROXY2

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
script "killall -0 haproxy"     # cheaper than pidof
interval 2                      # check every 2 seconds
weight 2                        # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface ens192
state MASTER
virtual_router_id 52
priority 100                    # 101 on master, 100 on backup
virtual_ipaddress {
10.0.0.54
}
track_script {
chk_haproxy
}
}
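
With keepalived running on both proxies you can check which node holds the floating address and exercise a failover; a rough sketch, using the interface and VIP from the configs above:

ip addr show ens192 | grep 10.0.0.54     # run on both nodes; only the current VRRP master lists the VIP
systemctl stop haproxy                   # on the master: chk_haproxy fails and the +2 priority bonus is lost
journalctl -u keepalived -n 20           # the other node should log a transition to MASTER and claim 10.0.0.54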

Configure your Firewall
The following iptables rules should be in place on both HAPROXY1 and HAPROXY2. Copy them into a text file (for example /root/firewall.fw, as used below) and load them with iptables-restore.

# Generated by iptables-save v1.4.21 on Thu Oct  8 15:18:59 2015
*filter
:INPUT ACCEPT [26988:2784395]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [35111:2263400]
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p vrrp -j ACCEPT
-A INPUT -d 224.0.0.0/8 -j ACCEPT
COMMIT
# Completed on Thu Oct  8 15:18:59 2015
iptables-restore < /root/firewall.fw

Centos 7 – Part 3 – HAProxy with SSL support combined with NGINX Load Balancing

In the previous post, instructions were given on how to combine HAProxy with NGINX load balancing. However, that particular setup did not support SSL. These instructions add SSL support at the HAProxy server: HAProxy terminates the SSL connection, and the traffic it forwards to the NGINX servers, as well as the traffic between the NGINX servers and the Apache web servers, is plain HTTP. It should be noted that the ability to log the actual IPs of visitors is not lost with the implementation of SSL.

The difference here is that in the haproxy config file we specify the use of SSL, and we no longer use the "listen" section like we did without SSL in the previous post.

[Diagram: nginx-HA]

Generate the SSL certificate (self-signed in this example; the openssl ca step only applies if a local CA is configured and can be skipped otherwise).

openssl genrsa -out haproxy1.key 1024
openssl req -new -key haproxy1.key -out haproxy1.csr
openssl ca -policy policy_anything -in haproxy1.csr -out haproxy1.crt
openssl x509 -req -days 365 -in haproxy1.csr -signkey haproxy1.key -out haproxy1.crt
cat haproxy1.crt haproxy1.key | tee haproxy1.pem
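
Before pointing HAProxy at the combined PEM, a quick sanity check of the file does not hurt; a minimal sketch:

openssl x509 -in haproxy1.pem -noout -subject -dates    # the certificate part is readable
openssl rsa -in haproxy1.pem -noout -check              # the private key part is present and valid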

 

Configure HAPROXY

vi /etc/haproxy/haproxy.cfg
global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 debug
        maxconn   45000 # Total Max Connections.
        daemon
        nbproc      1 # Number of processing cores.
defaults
        timeout server 86400000
        timeout connect 86400000
        timeout client 86400000
        timeout queue   1000s

frontend https_frontend
  bind 10.0.0.52:443 ssl crt /etc/ssl/haproxy1.sfentona.lol/haproxy1.pem
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  default_backend web_server

backend web_server
  mode http
  balance roundrobin
  cookie SERVERID insert indirect nocache
  server wordpressvirtip 10.0.0.44:80
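
HAProxy can validate the configuration before a restart; a quick check, using the config path from above:

haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy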

Centos 7 – Part 2 – HAProxy combined with NGINX Load Balancing.

In the previous post, instructions were given on how to set up a round-robin load balancer using NGINX and a virtual IP that passes requests to the Apache web servers.

In this post we will use the very same setup, but we place an HAProxy server in front of the virtual IP created by the NGINX servers. The HAProxy server also balances with round robin and passes requests to the NGINX servers, which in turn pass them to the Apache web servers. SSL is not yet implemented.

The IP address of the HAProxy server is 10.0.0.52 and the virtual IP we created is 10.0.0.44, with a DNS entry of "wordpressvirtip".

[Diagram: nginx-HA]

yum install haproxy
vi /etc/haproxy/haproxy.cfg
global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 debug
        maxconn   45000 # Total Max Connections.
        daemon
        nbproc      1 # Number of processing cores.
defaults
        timeout server 86400000
        timeout connect 86400000
        timeout client 86400000
        timeout queue   1000s

# [HTTP Site Configuration]
listen  http_web 10.0.0.52:80
        mode http
        balance roundrobin  # Load Balancing algorithm
        option httpchk
        option forwardfor
        server wordpressvirtip 10.0.0.44:80 weight 1 maxconn 512 check
        #server server2 10.0.0.40:80 weight 1 maxconn 512 check

# [HTTPS Site Configuration]
#listen  https_web 192.168.10.10:443
#        mode tcp
#        balance source# Load Balancing algorithm
#        reqadd X-Forwarded-Proto:\ http
#        server server1 192.168.10.100:443 weight 1 maxconn 512 check
#        server server2 192.168.10.101:443 weight 1 maxconn 512 check
systemctl start haproxy
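
A quick end-to-end test from another host might look like this (the Host header matches the virtual host names used in Part 1):

curl -I -H "Host: wordpress.sfentona.lol" http://10.0.0.52/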

Centos 7 – Part 1 – NGINX – Apache – Load Balancing High Availability

These instructions show how to set up a web load balancer using two NGINX servers as the load balancers and two Apache servers as the back-end web servers.

2 – Centos 7 servers running NGINX will be used as load balancers.

2 – Centos 7 servers running Apache will be used to serve web pages via virtual hosts.

The NGINX servers will:
– Determine the appropriate destination server based on the balancing method chosen; in this case round robin, which is the default.
– Use Keepalived to create a virtual IP address that is shared across the existing NICs of the load balancers.

The Apache servers will:
– Act as your normal, everyday web servers with virtual hosts.
– Log the IP of the actual client.

[Diagram: nginx-HA]

Setting up the Apache web Servers.

  • Create the directories where the content of your virtual hosts will be placed.
mkdir /etc/httpd/vhosts.d
mkdir -p /sites/wordpress/
chown -R apache:apache /sites/wordpress/
chmod 755 /sites/wordpress/

 

  • Instruct Apache to look into the directory you created for your virtual hosts.
vim /etc/httpd/conf.d/vhosts.conf
IncludeOptional vhosts.d/*.conf

 

  • Create the config file for the corresponding web site.
vim /etc/httpd/vhosts.d/wordpress.sfentona.lol.conf
<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /sites/wordpress/
    ServerName wordpress.sfentona.lol
    ServerAlias www.wordpress.sfentona.lol

    <Directory "/sites/wordpress">
        DirectoryIndex index.html index.php
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
 

  • Set up X-Forwarded-For logging so that your logs record the IP of the actual client who visited the site and not the IP of the load balancer. Define a forwarded log format and point a CustomLog directive at it:
vi /etc/httpd/conf/httpd.conf
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" forwarded
CustomLog "logs/access_log" forwarded

 

Configure NGINX as a load balancer (the relevant additions are the two proxy_set_header lines and the upstream block inside the http section).

vi /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    upstream wordpress {
        server 10.0.0.42:80;
        server 10.0.0.43:80;
    }

    server {
        listen       80;
        server_name  www.wordpress.sfentona.lol;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
                proxy_pass http://wordpress;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}
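
Before relying on the balancer, it is worth validating the configuration and sending a test request through it; a minimal sketch:

nginx -t                                                  # syntax check of /etc/nginx/nginx.conf
systemctl reload nginx
curl -I -H "Host: www.wordpress.sfentona.lol" http://127.0.0.1/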

Allow NGINX to bind to a non-local shared ip

vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
sysctl -p

Set up your firewall in order for Multicast and VRRP to work correctly.

iptables -I INPUT -d 224.0.0.0/8 -j ACCEPT
iptables -I INPUT -p vrrp -j ACCEPT

 

Configure Keepalived.

vi /etc/keepalived/keepalived.conf

This is for the master load balancer, LB1.

global_defs {
   notification_email {
     sysadmin@mydomain.com
     support@mydomain.com
   }
   notification_email_from lb1@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.44
    }
}

This is for the backup load balancer, LB2. The priority on the backup is a lower number.

global_defs {
   notification_email {
     sysadmin@mydomain.com
     support@mydomain.com
   }
   notification_email_from lb2@mydomain.com
   smtp_server localhost
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.44
    }
}
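
With both configuration files in place, start the services on the two load balancers and confirm which node owns the virtual IP; a rough sketch using the interface and VIP from above:

systemctl enable keepalived nginx
systemctl start keepalived nginx
ip addr show ens192 | grep 10.0.0.44    # only the current master should list the virtual IP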

Centos6.5-Win2012R2 – Setup Windows as your Master DNS and Bind as your Slave DNS.

This tutorial shows how to set up Windows 2012 R2 as a master DNS server and Centos 6 as a slave DNS server.

PRIMARY DNS NAME AND IP: AD1.SFENTONA.LOL  / 10.0.0.6
SLAVE   DNS NAME AND IP: DNS1.SFENTONA.LOL /10.0.0.10

Centos DNS CONFIG STEPS

———————————————————————————————————-
The following config files have been used in order to get DNS services up and running in Centos 6.

vi /etc/resolv.conf
nameserver 127.0.0.1
search sfentona.lol
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dns1.sfentona.lol
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:CA:90:ED
TYPE=Ethernet
UUID=49076518-17fb-4416-be14-de64aa36843a
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.192
GATEWAY=10.0.0.1
DNS1=127.0.0.1
DOMAIN="sfentona.lol"
vi /etc/named.conf

Under options you will have to specify the IP address of your Centos DNS server and the network from which you will accept queries.

listen-on port 53 { 127.0.0.1; 10.0.0.10; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { localhost; 10.0.0.10/26; };
//allow-transfer { 10.0.0.0/26; };
recursion yes;

Under zones you will have to create your forward and reverse lookup zones. Between the LOGGING and ZONE sections, include the following lines. We are essentially telling our Centos DNS service that it is a slave, giving it the IP of the master DNS, and telling it where the transferred zone files will be stored:

/var/named/slaves

sfentona.lol.zone

zone "sfentona.lol" IN {
type slave;
masters { 10.0.0.6; };
allow-query { any; };
file "slaves/sfentona.lol.zone";
};

sfentona.lol.rr.zone

zone "0.0.10.in-addr.arpa" IN {
type slave;
masters { 10.0.0.6; };
allow-query { any; };
file "slaves/sfentona.lol.rr.zone";
};
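
Once both sides are configured, you can confirm that the zone transfers actually happen; a rough sketch run on the Centos box, using the names and IPs from this post:

service named restart
ls /var/named/slaves/                     # sfentona.lol.zone and sfentona.lol.rr.zone should appear
dig @10.0.0.10 sfentona.lol SOA +short    # the slave should now answer for the zone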

 

 

Windows DNS CONFIG STEPS

———————————————————————————————————-

  • On your main DNS server's properties, check "Enable BIND secondaries".
  • You will have to enter your Linux server as a Name Server for both your forward and reverse lookup zones.
  • On your DNS zone (in this case sfentona.lol), under properties, enable "Zone Transfers". Specify your slave DNS server, or opt to allow transfers to all listed name servers. Make sure to do this for both the forward and reverse lookup zones of your domain.

Python – Install Python 2.7 alongside 2.6 in Centos 6

These instructions were taken from toomuchdata.com. The instructions on that site also show how to install Python 3; for my own purposes, however, I only copied the instructions for Python 2.7. They were tested on a Centos 6 machine and worked with no issues right out of the gate.

 

yum groupinstall "Development tools"
yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
wget http://python.org/ftp/python/2.7.6/Python-2.7.6.tar.xz
tar xf Python-2.7.6.tar.xz
cd Python-2.7.6
./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
make && make altinstall
cd -
wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
python2.7 ez_setup.py
easy_install-2.7 pip
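
A quick check that the altinstall landed where expected (paths assume the --prefix=/usr/local used above):

/usr/local/bin/python2.7 -V
/usr/local/bin/pip2.7 --version
python -V                        # the system Python 2.6 should be untouched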

In case you receive an error regarding zlib after you try to execute the easy_install command, run the following command.

cp /usr/lib64/python2.6/lib-dynload/zlibmodule.so /usr/local/lib/python2.7/lib-dynload/zlibmodule.so

Python / PostgreSQL 9.4 – Server Performance Data Capture V.2

In this second version, the scripts that capture data from the servers use classes. In addition, the script that captures dynamic data from the remote servers now collects more data: specifically, it captures RAM and HD usage as raw numbers and not only in "humanized" format. The humanized fields were not taken out. As a result the relevant tables had to be modified, which means the schema has changed as well. Also, the database and the scripts are now installed via Puppet. The Puppet manifests are far from polished; they need more work, but they do work.

These scripts gather static and dynamic information from servers and insert that data into a PostgreSQL database. The static information is information that never changes unless a major upgrade takes place. The dynamic data is performance data for the servers. The static data gives each server a stable record against which the dynamic information, inserted into the database every X minutes via cron, can be queried.

These scripts work only on Linux/Unix based machines.

The PUPPET modules used to install the Database and the scripts are located here

The static information collected from the remote servers is as follows:

hostname
iface
ipaddress
OS
OSRel
OSKern
total_M
brand
Hz
cores
arch

The dynamic information collected from the remote servers is as follows:

hostname
iface
ipaddress
total_ram_hum
used_ram_hum
total_ram_raw
used_ram_raw
used_ram_perc
total_HD_hum
used_HD_hum
total_HD_raw
used_HD_raw
used_HD_perc
cpu_use_perc
swap_used_hum
swap_total_hum
swap_perc
swap_used_raw
swap_total_raw

The static.py script only needs to be run once on the remote servers, or whenever a major upgrade changes the configuration of RAM, partitions, IP addresses, the operating system (even a version upgrade), the CPU, or a NIC.

The dynamic.py script runs on the remote servers via cron. It captures the information that changes constantly, such as memory, storage, and swap usage, and sends all of that data to the remote database for insertion.

In both scripts the data is entered into a dictionary, and a connection to the database is then created in order to insert it.
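
A hypothetical cron entry for the dynamic script could look like the following; the path /opt/scripts/dynamic.py is an assumption, not necessarily the path used by the Puppet module:

*/5 * * * * /usr/bin/python /opt/scripts/dynamic.py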

The Static.py script

#!/usr/bin/python
import psutil
import os
import math
import platform
import subprocess
import socket
import psycopg2
import netifaces as ni
import humanize
from cpuinfo import cpuinfo


class Static():
    def __init__(self):
        #NIC INFO
        self.hostname   = socket.gethostname()
        self.iface      = ni.interfaces()[1]
        self.ipaddress  = ni.ifaddresses(self.iface)[2][0]['addr']
        #---OS INFO
        #For Linux (RH-Debian) Operating Systems
        self.distro  = platform.dist()[0]
        self.release = platform.dist()[1]
        self.kernel  = platform.release()
        #For Windows Operating Systems
        self.osinfo_2_os    = platform.uname()[0]
        self.osinfo_2_ver   = platform.uname()[2]
        self.osinfo_2_rel   = platform.uname()[3]
        #----RAM INFO
        raw_totalM = psutil.virtual_memory().total
        self.total_M    = humanize.naturalsize(raw_totalM)
        #----CPU INFO
        self.info       = cpuinfo.get_cpu_info()
        self.brand      = self.info['brand']
        self.Hz         = self.info['hz_advertised']
        self.cores        = self.info['count']
        self.arch       = self.info['bits']

    def get_OS_make(self):
       if platform.system()  =="Linux":
           return self.distro, self.release, self.kernel
       elif platform.system()     =="Windows":
           return self.osinfo_2_os, self.osinfo_2_ver, self.osinfo_2_rel

info = Static()



hostname  = info.hostname
iface     = info.iface
ipaddress = info.ipaddress
OS        = info.get_OS_make()[0]
OSRel     = info.get_OS_make()[1]
OSKern    = info.get_OS_make()[2]
total_M   = info.total_M
brand     = info.brand
Hz        = info.Hz
cores     = info.cores
arch      = info.arch




#Create the Database PostgreSQL 9.4 connection.
conn = psycopg2.connect("host='172.31.98.161' dbname='servers' user='seeker'")
cur = conn.cursor() #Create the cursor
#Create a dictionary with the value of each attribute and pass it as the query parameters.
server_info = {'hostname': hostname, 'iface': iface, 'ipaddress': ipaddress, 'OS': OS, 'OSRel': OSRel, 'OSKern': OSKern, 'total_M': total_M, 'brand': brand, 'Hz': Hz, 'cores': cores, 'arch': arch}
cur.execute("INSERT INTO servers(hostname, iface, ipaddress, OS, OSRel, OSKern, total_M, brand, Hz, cores, arch) VALUES (%(hostname)s, %(iface)s, %(ipaddress)s, %(OS)s, %(OSRel)s, %(OSKern)s, %(total_M)s, %(brand)s, %(Hz)s, %(cores)s, %(arch)s)", server_info)
#If this is not present the changes will not get committed.
conn.commit()

The Dynamic.py script

#!/usr/bin/python
#THIS FILE IS MANAGED BY PUPPET
import psutil
import os
import math
import platform
import subprocess
import socket
import psycopg2
import netifaces as ni
import humanize
from cpuinfo import cpuinfo

class Dynamic():
    def __init__(self):
        #NIC INFO
        self.hostname   = socket.gethostname()
        self.iface      = ni.interfaces()[1]
        self.ipaddress  = ni.ifaddresses(self.iface)[2][0]['addr']
        #RAM USAGE INFO-------------------------------------------------------------------
        self.total_ram_hum       = humanize.naturalsize(psutil.virtual_memory().total)
        self.used_ram_hum        = humanize.naturalsize(psutil.virtual_memory().used)
        #---------Raw info
        self.total_ram_raw       = (psutil.virtual_memory().total)
        self.used_ram_raw        = (psutil.virtual_memory().used)
        self.used_ram_perc       = psutil.virtual_memory().percent
        #HD USAGE INFO-------------------------------------------------------------------
        self.total_HD_hum        = humanize.naturalsize(psutil.disk_usage('/').total)
        self.used_HD_hum         = humanize.naturalsize(psutil.disk_usage('/').used)
        #---------Raw info
        self.total_HD_raw        =(psutil.disk_usage('/').total)
        self.used_HD_raw         =(psutil.disk_usage('/').used)
        self.used_HD_perc        = psutil.disk_usage('/').percent
        #CPU USAGE INFO-------------------------------------------------------------------
        self.cpu_use_perc        = psutil.cpu_percent()
        #SWAP USAGE INFO-------------------------------------------------------------------
        self.swap_used_hum           = humanize.naturalsize(psutil.swap_memory().used)
        self.swap_total_hum          = humanize.naturalsize(psutil.swap_memory().total)
        self.swap_perc               = psutil.swap_memory()[3]
        #---------Raw info
        self.swap_used_raw           = (psutil.swap_memory().used)
        self.swap_total_raw          = (psutil.swap_memory().total)
    def export_to_csv(self):
        print self.hostname
info = Dynamic()

hostname            = info.hostname
iface               = info.iface
ipaddress           = info.ipaddress
total_ram_hum       = info.total_ram_hum
used_ram_hum        = info.used_ram_hum
total_ram_raw       = info.total_ram_raw
used_ram_raw        = info.used_ram_raw
used_ram_perc       = info.used_ram_perc
total_HD_hum        = info.total_HD_hum
used_HD_hum         = info.used_HD_hum
total_HD_raw        = info.total_HD_raw
used_HD_raw         = info.used_HD_raw
used_HD_perc        = info.used_HD_perc
cpu_use_perc        = info.cpu_use_perc
swap_used_hum       = info.swap_used_hum
swap_total_hum      = info.swap_total_hum
swap_perc           = info.swap_perc
swap_used_raw       = info.swap_used_raw
swap_total_raw      = info.swap_total_raw


conn = psycopg2.connect("host='172.31.98.161' dbname='servers' user='seeker'")
cur = conn.cursor() #Create the cursor
#Create a dictionary with the value of each attribute and pass it as the query parameters.
server_info = {'hostname': hostname, 'iface': iface, 'ipaddress': ipaddress, 'total_ram_hum': total_ram_hum, 'used_ram_hum': used_ram_hum, 'total_ram_raw': total_ram_raw, 'used_ram_raw': used_ram_raw, 'used_ram_perc': used_ram_perc, 'total_HD_hum': total_HD_hum, 'used_HD_hum': used_HD_hum, 'total_HD_raw': total_HD_raw, 'used_HD_raw': used_HD_raw, 'used_HD_perc': used_HD_perc, 'cpu_use_perc': cpu_use_perc, 'swap_used_hum': swap_used_hum, 'swap_total_hum': swap_total_hum, 'swap_perc': swap_perc, 'swap_used_raw': swap_used_raw, 'swap_total_raw': swap_total_raw}
cur.execute("INSERT INTO server_perf(hostname, iface, ipaddress, total_ram_hum, used_ram_hum, total_ram_raw, used_ram_raw, used_ram_perc, total_HD_hum, used_HD_hum, total_HD_raw, used_HD_raw, used_HD_perc, cpu_use_perc, swap_used_hum, swap_total_hum, swap_perc, swap_used_raw, swap_total_raw) VALUES (%(hostname)s, %(iface)s, %(ipaddress)s, %(total_ram_hum)s, %(used_ram_hum)s, %(total_ram_raw)s, %(used_ram_raw)s, %(used_ram_perc)s, %(total_HD_hum)s, %(used_HD_hum)s, %(total_HD_raw)s, %(used_HD_raw)s, %(used_HD_perc)s, %(cpu_use_perc)s, %(swap_used_hum)s, %(swap_total_hum)s, %(swap_perc)s, %(swap_used_raw)s, %(swap_total_raw)s)", server_info)
#If this is not present the changes will not get committed.
conn.commit()

The new Database schema is as follows:

pg_dump -U seeker -d servers -s -h 172.31.98.161 > servers_db_schema

CREATE DATABASE SERVERS;
CREATE ROLE seeker WITH PASSWORD 'Password!';
ALTER DATABASE SERVERS OWNER TO seeker;
ALTER ROLE seeker WITH LOGIN;
GRANT ALL PRIVILEGES ON DATABASE SERVERS TO seeker;

--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner:
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;

--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: server_perf; Type: TABLE; Schema: public; Owner: seeker; Tablespace:
--

CREATE TABLE server_perf (
hostname text NOT NULL,
iface text,
ipaddress inet NOT NULL,
total_ram_hum text,
used_ram_hum text,
total_ram_raw numeric(30,2),
used_ram_raw numeric(30,2),
used_ram_perc text,
total_hd_hum text,
used_hd_hum text,
total_hd_raw numeric(30,2),
used_hd_raw numeric(30,2),
used_hd_perc text,
cpu_use_perc text,
swap_used_hum text,
swap_total_hum text,
swap_perc text,
swap_used_raw numeric(30,2),
swap_total_raw numeric(30,2),
time_captured timestamp without time zone DEFAULT now()
);

ALTER TABLE server_perf OWNER TO seeker;

--
-- Name: servers; Type: TABLE; Schema: public; Owner: seeker; Tablespace:
--

CREATE TABLE servers (
hostname text NOT NULL,
iface text,
ipaddress inet NOT NULL,
os text,
osrel text,
oskern text,
total_m text,
brand text,
hz text,
cores numeric(4,1),
arch text
);

ALTER TABLE servers OWNER TO seeker;

--
-- Name: pk_hostname; Type: CONSTRAINT; Schema: public; Owner: seeker; Tablespace:
--

ALTER TABLE ONLY servers
ADD CONSTRAINT pk_hostname PRIMARY KEY (hostname);

--
-- Name: server_perf_hostname_fkey; Type: FK CONSTRAINT; Schema: public; Owner: seeker
--

ALTER TABLE ONLY server_perf
ADD CONSTRAINT server_perf_hostname_fkey FOREIGN KEY (hostname) REFERENCES servers(hostname);

--
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;

--
-- PostgreSQL database dump complete
--
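
With data flowing in, the most recent samples can be pulled straight from the command line; a minimal sketch using the same connection details as the scripts above:

psql -h 172.31.98.161 -U seeker -d servers -c "SELECT hostname, used_ram_perc, used_hd_perc, cpu_use_perc, time_captured FROM server_perf ORDER BY time_captured DESC LIMIT 10;"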