A note to visitors

This web site is where I store information and skills I acquire in order to retain them.  These instructions are not meant for production environments. They serve merely as a blueprint for how to do certain things.

Thank you.


Ansible – Enable and Restart a service

In this example the “syslog-ng” service is used.

- hosts: [targethosts]

  become: yes
  become_method: sudo

  tasks:
    - name: Gather Executing User Name
      command: whoami
      check_mode: no  # replaces 'always_run', which was deprecated in Ansible 2.2
      register: executing_user_id
      delegate_to: 127.0.0.1
    - name: Restart syslog-ng service
      service: name=syslog-ng  state=restarted
    - name: Enable service
      service: name=syslog-ng  enabled=yes

    - name: Start writing to Ansible Log file
      lineinfile: dest="/var/log/ansible_history" line="TI-4003 syslog-ng  -  DATE {{ ansible_date_time.iso8601 }} - USER {{ executing_user_id.stdout }}" create=yes state=present insertafter=EOF

Python – Put name of files in a list and also generate dynamically variables associated with each file

import glob

files_list = []  #Putting all file names in a list
for files in glob.glob("*"):
    files_list.append(files)

vars_dict = {} #creating a variable for each file dynamically with a dictionary
for elem_files in range(len(files_list)):
    vars_dict[elem_files] = files_list[elem_files]
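The same mapping can be built more compactly with `enumerate` and a dict comprehension. A minimal sketch, with made-up file names standing in for the `glob.glob("*")` results:

```python
files_list = ["notes.txt", "todo.md", "backup.tar"]  # stand-ins for glob.glob("*") output

# Map each index to its file name in one line
vars_dict = {index: name for index, name in enumerate(files_list)}
print(vars_dict)  # {0: 'notes.txt', 1: 'todo.md', 2: 'backup.tar'}
```

`enumerate` yields (index, item) pairs, so there is no need to index the list via `range(len(...))`.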

Python – Merge two lists, eliminate duplicates, detect items that do not overlap between lists

f_master_list = open('master-list', 'r')
f_second_list = open('my_list', 'r')

master_list  = []
second_list = []

for line in f_master_list:
    master_list.append(line.strip())

for line in f_second_list:
    second_list.append(line.strip())

#print master_list
#print second_list
print ""
print "-----------------------------------------------"
print "Below items are unique to each list."
print "-----------------------------------------------"
print  set(master_list).symmetric_difference(second_list)
print ""
print ""
print "---------------------------------------------------------------"
print "Merging both lists and removing duplicates"
print "---------------------------------------------------------------"
results = list(set(master_list+second_list))

results.sort()
master_list.sort()
second_list.sort()


print "master_list has   ",   len(master_list), "elements"
print "second_list has ",     len(second_list), "elements"
print "The updated list has", len(results), "elements"
print results

for items in results:
    print items
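The same set operations in a minimal in-memory sketch (the list contents here are invented for illustration):

```python
# Toy lists standing in for the file contents read above
master_list = ["alpha", "bravo", "charlie"]
second_list = ["bravo", "delta"]

# Items unique to each list
print(sorted(set(master_list).symmetric_difference(second_list)))  # ['alpha', 'charlie', 'delta']

# Merge both lists and remove duplicates
results = sorted(set(master_list + second_list))
print(results)  # ['alpha', 'bravo', 'charlie', 'delta']
```

`symmetric_difference` returns the items that appear in exactly one of the two collections, while the union of both lists deduplicated via `set` gives the merged result.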

Python 2.7 – Generate GET requests from web server

This script generates simple GET requests to a web server. Threading is used in order to generate multiple GET requests at the same time. Note that this script cannot create significant stress on a web server.

import requests
import threading


def get_thrasher():
    req_get = requests.get('http://192.168.56.101/')

threads = []

for counter_1 in range(10):
    thrd = threading.Thread(target=get_thrasher)  # pass the function itself, do not call it
    thrd.start()
    threads.append(thrd)

for counter_2 in threads:
    counter_2.join()
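The same fan-out pattern can be demonstrated without a live web server. A self-contained sketch where a stand-in function (all names here are invented) replaces `requests.get`, and the target URL is passed through `args`:

```python
import threading

hits = []
lock = threading.Lock()

def fake_get(url):
    # Stand-in for requests.get(url): just record the call thread-safely
    with lock:
        hits.append(url)

threads = []
for _ in range(10):
    # target takes the function itself; its arguments go in the args tuple
    thrd = threading.Thread(target=fake_get, args=('http://192.168.56.101/',))
    thrd.start()
    threads.append(thrd)

for thrd in threads:
    thrd.join()

print(len(hits))  # 10
```

Passing `target=fake_get` (without parentheses) is what makes each thread run the function; writing `target=fake_get()` would call the function once in the main thread and hand the threads its return value instead.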

Python 2.7 – Find Location of IP with the geoip2 database.

The script collects the IP addresses from the apache log file. It then uses the geoip2 database in order to find the geographical location of the IP. More information for the geoip2 database can be found at http://dev.maxmind.com/geoip/geoip2/downloadable/

The module used to capture the IPs from the apache log file requires a CustomLog format. It needs to be specified both in the apache config file and in the script. The string used is

"%h <<%P>> %t %Dus \"%r\" %>s %b  \"%{Referer}i\" \"%{User-Agent}i\" %l %u"

import geoip2.database
import apache_log_parser

#specify the log file we will capture the IP from.
dir     = "/var/log/apache2/"
file    = "access.log"
apache_logfile = dir+file


#Create a connection to the mmdb file with all the IP geo-location data.
reader = geoip2.database.Reader("GeoLite2-City.mmdb")

#In case we cannot open the file throw an error message
try:
        f_open = open(apache_logfile, "rb")
except Exception as e:
        print e
        raise  #without the log file there is nothing to parse

#As required by the apache_log_parser module
line_parser = apache_log_parser.make_parser("%h <<%P>> %t %Dus \"%r\" %>s %b  \"%{Referer}i\" \"%{User-Agent}i\" %l %u")

#This is the list we will put in IPs.
ip_list = []

for loop in f_open:  #We are going through the file specified
    log_line_data = line_parser(loop) #We are using the apache parser as specified above
    remote_ip = log_line_data['remote_host'] #The apache parser returns a dictionary. We just want the remote_host key.
    ip_list.append(remote_ip)  #We are appending the IP to the list we created above.


unique_ip_list = set(ip_list)  # We delete the duplicate IP entries from our list.
for ips in unique_ip_list:
        try: #In case the IP is not recognized by the geoip2 database
                locate_ip = reader.city(ips) # we are using the geoip2 module here with the IPs from our list
                print ips, locate_ip.country.name
        except Exception as e:
                print e


Types of Replication general info

Asynchronous

Data is replicated after it has been committed. Data loss can occur
if the master server crashes before the changes are committed by the remote server.

Synchronous

The system has to ensure that the data written by the transaction is present
on at least two servers when the transaction commits. Confirmation
from the remote server is needed, and this creates overhead.


 

Single Master Replication

The master server replicates the data to the slave servers.
Writes go to the master server and these changes are distributed
to the slave servers.

Multi Master

In this setup writes are allowed on all the servers in the cluster.
This means a lot of writes can go to many nodes at the same time.


 

Logical Replication

This replication distributes the changes at a logical level. It
does not concern itself with the physical layout of the data structures
of the database itself.

Physical Replication

This is the type of replication where data is moved as-is. The
replication of data is done at the binary level.
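The synchronous-versus-asynchronous trade-off above can be illustrated with a toy Python sketch (not real replication code; all names and the delay value are invented). A synchronous commit waits for the standby to acknowledge the change, while an asynchronous commit returns immediately and replicates afterwards:

```python
import time

REPLICA_DELAY = 0.05  # pretend network round trip plus apply time

def replicate(change, replica):
    time.sleep(REPLICA_DELAY)  # simulated trip to the standby server
    replica.append(change)

def commit(change, replica, synchronous):
    start = time.time()
    if synchronous:
        replicate(change, replica)  # block until the standby has the change
    elapsed = time.time() - start   # how long the caller waits for the commit
    if not synchronous:
        replicate(change, replica)  # applied "later", after the commit returned
    return elapsed

replica = []
sync_wait = commit("row 1", replica, synchronous=True)
async_wait = commit("row 2", replica, synchronous=False)
print(sync_wait > async_wait)  # the synchronous commit pays the round-trip cost
```

The sketch shows why synchronous replication trades commit latency for durability: the window in which an asynchronous master can crash with unreplicated changes is exactly the time between `commit` returning and `replicate` finishing.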


PostgreSQL 9.4 – Database dumps and Restores

Create a Compressed dump file of a Remote Database and Restore it

  • Create a highly compressed data dump
pg_dump -h 10.0.0.19 -U seeker -C -c -v -F c -d servers -f data_dump.dump
  • Restore the data dump you created
pg_restore -C -v -U seeker -d servers -h 10.0.0.32  data_dump.dump

Create SQL data dumps and Restore them

  • Create a SQL Schema only dump
pg_dump -U  seeker  -d servers   -h 10.0.0.32 --schema-only -f schema_only.sql
  • Create a SQL Data only dump
pg_dump -U  seeker  -d servers   -h 10.0.0.32 --data-only -f data_only.sql
  • Restore SQL data dump using psql
\i file_name.sql

Important Notes about Backing up and Restoring

The database, schema, and role must be created first.

Back up and Restore specific tables

  • Backup a specific table
 pg_dump -Fc -v -U seeker -d servers -t a_table > /tmp/a_table.pgdump
  • View the tables of a data dump
pg_restore -l  data_dump_file.dump
  • Extract a table from a dump file
pg_restore -U seeker --data-only --table=a_table database.pgdump > a_table.pg
  • Upload the data of this table to a table in the database
 psql -U user_name -d database_name < name_of_your_dump.pg

pg_dump incompatibility issue between servers

You may have to dump a remote server running PostgreSQL 9.4 from a machine whose local pg_dump is version 9.3. Since pg_dump cannot dump a server newer than itself, an error like this may occur.

 pg_dump: server version: 9.4.4; pg_dump version: 9.3.10 

You will have to install the 9.4 version of PostgreSQL and then adjust the soft link /usr/bin/pg_dump to point to /usr/pgsql-9.4/bin/pg_dump (the -f flag replaces the existing link).

sudo ln -sf /usr/pgsql-9.4/bin/pg_dump /usr/bin/pg_dump

Postgresql 9.4: Create a template database.

In PostgreSQL, template1 is the default source database for CREATE DATABASE. The template0 database is used only when we specify it explicitly. More information can be found in the official documentation.

Using template0 we can create a user database that "contains none
of the site-local additions in template1. This is particularly handy
when restoring a pg_dump dump: the dump script should be restored
in a virgin database to ensure that one recreates the correct
contents of the dumped database. Another common reason for
copying template0 instead of template1 is that new encoding and
locale settings can be specified when copying template0, whereas a
copy of template1 must use the same settings it does. This is because
template1 might contain encoding-specific or locale-specific data,
while template0 is known not to."

You can create your own custom Template to be used in the creation of databases. For example, assume you have the database with the name ‘servers’. You can create a template out of that database with the following steps.

Make sure to kill all connections from the database and disallow any new connections.

update pg_database 
set datallowconn = false 
where datname = 'servers';

Kill any other live connections

select pg_terminate_backend(pid)
from pg_stat_activity
where datname = 'servers';

Create your template

create database servers_2 template servers;

Make sure to activate the template flag in your template database in order to avoid having the template being altered.

update pg_database 
set datistemplate = TRUE 
where datname = 'servers_2';

Remember to allow connections to the database you used to create the template.

update pg_database 
set datallowconn = true 
where datname = 'servers';

Postgresql 9.4: System tables ‘pg_stat_activity’ and ‘pg_database’, Cancel/Kill/Stop connections queries to a database.

You can view the status of connections to the postgresql database and the status of the queries. This info can be sorted by different fields.

      Column      | Modifiers
------------------+-----------
 datid            |
 datname          | The database name
 pid              | The pid number
 usesysid         |
 usename          | The username
 application_name |
 client_addr      | The IP address connected
 client_hostname  |
 client_port      | The port used
 backend_start    | The time at which the client connection (backend process) started
 xact_start       | The time at which the transaction started to run. Should always have occurred before the query_start time 
 query_start      | The time the actual query started 
 state_change     |
 waiting          |
 state            |
 backend_xid      |
 backend_xmin     |
 query            | The actual query

View queries for specific user

select 
datname, pid, usename, client_addr, query_start, query
from pg_stat_activity where usename = 'seeker';
datname     | servers
pid         | 21694
usename     | seeker
client_addr | 10.0.0.19
query_start | 2015-10-16 15:59:08.234508-04
query       | INSERT INTO IP_DNS_SCAN(hostname, ipaddress, remote_ip, pingStatus, hostDNScheck) VALUES ('superman.sfentona.lol','10.0.
0.19','10.0.0.54','UP','wordpress.sfentona.lol.')

You can cancel or kill queries by using pg_cancel_backend (cancels the current query) or pg_terminate_backend (terminates the whole connection).

Cancel or Kill the queries of a specific pid

select pg_terminate_backend(pid) 
from pg_stat_activity 
where pid = 24297;

Cancel or Kill the queries of a specific user

select pg_terminate_backend(pid)
from pg_stat_activity 
where usename = 'gmastrokostas';

Kill all queries besides your own

select pg_terminate_backend(pid)
from pg_stat_activity
where usename <> 'gmastrokostas';

Disallow all new connections to a specific Database

update pg_database 
set datallowconn = FALSE 
where datname = 'name_your_database';