Types of Replication – General Info

Asynchronous

Data is replicated after it has been committed. Data loss is possible
if the master server crashes before the changes have been applied on the remote server.

Synchronous

The system has to ensure that the data written by the transaction is present
on at least two servers when the transaction commits. Confirmation
from the remote server is needed, and this creates overhead.
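
As a minimal sketch: in PostgreSQL 9.4 this trade-off is driven by the synchronous_commit and synchronous_standby_names settings (here 'standby1' is a placeholder for the application_name your standby actually uses):

ALTER SYSTEM SET synchronous_standby_names = 'standby1';
ALTER SYSTEM SET synchronous_commit = 'on';
SELECT pg_reload_conf();

Setting synchronous_standby_names back to an empty string falls back to asynchronous replication.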


 

Single Master Replication

The master server replicates the data to the slave servers.
Writes go to the master server and these changes are distributed
to the slave servers.

Multi Master

In this setup writes are allowed on all the servers in the cluster.
This means a lot of writes can go to many nodes at the same time.


 

Logical Replication

This replication distributes the changes at a logical level. It
does not concern itself with the physical layout of the data structures
of the database itself.

Physical Replication

This is the type of replication where data is moved as-is. The replication
of data is done at a binary level.
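
For a taste of logical-level changes, PostgreSQL 9.4 ships logical decoding. A minimal sketch, assuming wal_level = logical, max_replication_slots > 0, and the test_decoding contrib module (the slot name test_slot is arbitrary):

SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);
SELECT pg_drop_replication_slot('test_slot');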


PostgreSQL 9.4 – Database dumps and Restores

Create a Compressed dump file of a Remote Database and Restore it

  • Create a highly compressed data dump
pg_dump -h 10.0.0.19 -U seeker -C -c -v -F c -d servers -f data_dump.dump
  • Restore the data dump you created
pg_restore -C -v -U seeker -d servers -h 10.0.0.32  data_dump.dump

Create SQL data dumps and Restore them

  • Create a SQL Schema only dump
pg_dump -U  seeker  -d servers   -h 10.0.0.32 --schema-only -f schema_only.sql
  • Create a SQL Data only dump
pg_dump -U  seeker  -d servers   -h 10.0.0.32 --data-only -f data_only.sql
  • Restore SQL data dump using psql
\i "file_name".sql

Important Notes about Backing up and Restoring

The target database, the schema, and the role must be created first.
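
For example, a minimal sketch of creating the role and database before restoring (the password is a placeholder):

CREATE ROLE seeker WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE servers OWNER seeker;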

Back up and Restore specific tables

  • Backup a specific table
 pg_dump -Fc -v -U seeker -d servers -t a_table > /tmp/a_table.pgdump
  • View the tables of a data dump
pg_restore -l  data_dump_file.dump
  • Extract a table from a dump file
pg_restore -U seeker --data-only --table=a_table database.pgdump > a_table.pg
  • Upload the data of this table to a table in the database
 psql -U user_name -d database_name < name_of_your_dump.pg

pg_dump incompatibility issue between servers

You may have to run pg_dump against a remote server running PostgreSQL 9.4 from a machine whose pg_dump is version 9.3. An error like this may occur.

 pg_dump: server version: 9.4.4; pg_dump version: 9.3.10 

You will have to install the 9.4 version of the PostgreSQL client tools and then adjust the soft link /usr/bin/pg_dump to point to /usr/pgsql-9.4/bin/pg_dump:

sudo ln -sf /usr/pgsql-9.4/bin/pg_dump /usr/bin/pg_dump

PostgreSQL 9.4: Create a template database.

In PostgreSQL, template1 is the default source database for CREATE DATABASE. The template0 database is used only when we specify it explicitly. More information is in the official documentation.

From template0 we can create a user database that "contains none
of the site-local additions in template1. This is particularly handy
when restoring a pg_dump dump: the dump script should be restored
in a virgin database to ensure that one recreates the correct
contents of the dumped database. Another common reason for
copying template0 instead of template1 is that new encoding and
locale settings can be specified when copying template0, whereas a
copy of template1 must use the same settings it does. This is because
template1 might contain encoding-specific or locale-specific data,
while template0 is known not to."
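
A minimal sketch (the database name and locale values are placeholders):

CREATE DATABASE restored_db TEMPLATE template0 ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';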

You can create your own custom template to be used when creating databases. For example, assume you have a database named ‘servers’. You can create a template out of that database with the following steps.

Make sure to disallow any new connections to the database and kill all existing connections.

update pg_database 
set datallowconn = false 
where datname = 'servers';

Kill any other live connections

select pg_terminate_backend(pid)
from pg_stat_activity
where datname = 'servers';

Create your template

create database servers_2 template servers;

Make sure to activate the template flag (datistemplate in pg_database) for your new database in order to avoid having the template altered.

update pg_database 
set datistemplate = TRUE 
where datname = 'servers_2';

Remember to allow connections to the database you used to create the template.

update pg_database 
set datallowconn = true 
where datname = 'servers';
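
Once the datistemplate flag is set, you can clone new databases from the template; a quick sketch with a hypothetical name servers_3:

create database servers_3 template servers_2;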

PostgreSQL 9.4: System tables ‘pg_stat_activity’ and ‘pg_database’ – Cancel/Kill/Stop connections and queries to a database.

You can view the status of connections to the PostgreSQL database and the status of their queries. This info can be sorted by different fields; see the example right after the column list below.

      Column      | Description
------------------+------------------------------------------------------------
 datid            |
 datname          | The database name
 pid              | The process ID of the backend
 usesysid         |
 usename          | The username
 application_name |
 client_addr      | The IP address of the connected client
 client_hostname  |
 client_port      | The port used by the client
 backend_start    | The time at which the client connected
 xact_start       | The time at which the current transaction started. Should always be at or before the query_start time
 query_start      | The time the current query started
 state_change     |
 waiting          |
 state            |
 backend_xid      |
 backend_xmin     |
 query            | The actual query
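
For example, to sort the connected backends by how long their queries have been running:

select pid, usename, state, query_start, query
from pg_stat_activity
order by query_start;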

View queries for a specific user

select 
datname, pid, usename, client_addr, query_start, query
from pg_stat_activity where usename = 'seeker';
datname     | servers
pid         | 21694
usename     | seeker
client_addr | 10.0.0.19
query_start | 2015-10-16 15:59:08.234508-04
query       | INSERT INTO IP_DNS_SCAN(hostname, ipaddress, remote_ip, pingStatus, hostDNScheck) VALUES ('superman.sfentona.lol','10.0.
0.19','10.0.0.54','UP','wordpress.sfentona.lol.')

You can cancel a running query with pg_cancel_backend (the connection stays open), or kill the whole connection with pg_terminate_backend.
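
The examples below use pg_terminate_backend; a cancel-only variant, skipping your own session via pg_backend_pid(), would be:

select pg_cancel_backend(pid)
from pg_stat_activity
where state = 'active'
and pid <> pg_backend_pid();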

Kill the connection of a specific pid

select pg_terminate_backend(pid) 
from pg_stat_activity 
where pid = 24297;

Kill the connections of a specific user

select pg_terminate_backend(pid)
from pg_stat_activity 
where usename = 'gmastrokostas';

Kill all connections besides your own

select pg_terminate_backend(pid)
from pg_stat_activity
where usename <> 'gmastrokostas';

Disallow all new connections to a specific Database

update pg_database 
set datallowconn = FALSE 
where datname = 'name_your_database';

PostgreSQL 9.4: System Table – ‘pg_settings’ – Queries that display PostgreSQL configuration settings

Obtaining Configuration settings.

Get Version of Database

select version();

Show Data directory

show data_directory;

Show config files

select name, setting 
from pg_settings
where category = 'File Locations';

 


Query your postgresql.conf file via pg_settings

name      - Name of the setting
context   - When a change takes effect (postmaster settings need a full server restart; user settings do not)
unit      - MBs, KBs etc.
setting   - Current value
boot_val  - Default value
reset_val - Value that RESET would restore the parameter to in the current session

Example

select 
name, context, unit, setting, boot_val, reset_val
from pg_settings
where name in 
('listen_addresses', 'max_connections', 'shared_buffers');
       name       |  context   | unit | setting | boot_val  | reset_val
------------------+------------+------+---------+-----------+-----------
 listen_addresses | postmaster |      | *       | localhost | *
 max_connections  | postmaster |      | 100     | 100       | 100
 shared_buffers   | postmaster | 8kB  | 16384   | 1024      | 16384
(3 rows)

All pg_settings fields you can query

name | allow_system_table_mods
name | application_name
name | archive_command
name | archive_mode
name | archive_timeout
name | array_nulls
name | authentication_timeout
name | autovacuum
name | autovacuum_analyze_scale_factor
name | autovacuum_analyze_threshold
name | autovacuum_freeze_max_age
name | autovacuum_max_workers
name | autovacuum_multixact_freeze_max_age
name | autovacuum_naptime
name | autovacuum_vacuum_cost_delay
name | autovacuum_vacuum_cost_limit
name | autovacuum_vacuum_scale_factor
name | autovacuum_vacuum_threshold
name | autovacuum_work_mem
name | backslash_quote
name | bgwriter_delay
name | bgwriter_lru_maxpages
name | bgwriter_lru_multiplier
name | block_size
name | bonjour
name | bonjour_name
name | bytea_output
name | check_function_bodies
name | checkpoint_completion_target
name | checkpoint_segments
name | checkpoint_timeout
name | checkpoint_warning
name | client_encoding
name | client_min_messages
name | commit_delay
name | commit_siblings
name | config_file
name | constraint_exclusion
name | cpu_index_tuple_cost
name | cpu_operator_cost
name | cpu_tuple_cost
name | cursor_tuple_fraction
name | data_checksums
name | data_directory
name | DateStyle
name | db_user_namespace
name | deadlock_timeout
name | debug_assertions
name | debug_pretty_print
name | debug_print_parse
name | debug_print_plan
name | debug_print_rewritten
name | default_statistics_target
name | default_tablespace
name | default_text_search_config
name | default_transaction_deferrable
name | default_transaction_isolation
name | default_transaction_read_only
name | default_with_oids
name | dynamic_library_path
name | dynamic_shared_memory_type
name | effective_cache_size
name | effective_io_concurrency
name | enable_bitmapscan
name | enable_hashagg
name | enable_hashjoin
name | enable_indexonlyscan
name | enable_indexscan
name | enable_material
name | enable_mergejoin
name | enable_nestloop
name | enable_seqscan
name | enable_sort
name | enable_tidscan
name | escape_string_warning
name | event_source
name | exit_on_error
name | external_pid_file
name | extra_float_digits
name | from_collapse_limit
name | fsync
name | full_page_writes
name | geqo
name | geqo_effort
name | geqo_generations
name | geqo_pool_size
name | geqo_seed
name | geqo_selection_bias
name | geqo_threshold
name | gin_fuzzy_search_limit
name | hba_file
name | hot_standby
name | hot_standby_feedback
name | huge_pages
name | ident_file
name | ignore_checksum_failure
name | ignore_system_indexes
name | integer_datetimes
name | IntervalStyle
name | join_collapse_limit
name | krb_caseins_users
name | krb_server_keyfile
name | lc_collate
name | lc_ctype
name | lc_messages
name | lc_monetary
name | lc_numeric
name | lc_time
name | listen_addresses
name | lo_compat_privileges
name | local_preload_libraries
name | lock_timeout
name | log_autovacuum_min_duration
name | log_checkpoints
name | log_connections
name | log_destination
name | log_directory
name | log_disconnections
name | log_duration
name | log_error_verbosity
name | log_executor_stats
name | log_file_mode
name | log_filename
name | log_hostname
name | log_line_prefix
name | log_lock_waits
name | log_min_duration_statement
name | log_min_error_statement
name | log_min_messages
name | log_parser_stats
name | log_planner_stats
name | log_rotation_age
name | log_rotation_size
name | log_statement
name | log_statement_stats
name | log_temp_files
name | log_timezone
name | log_truncate_on_rotation
name | logging_collector
name | maintenance_work_mem
name | max_connections
name | max_files_per_process
name | max_function_args
name | max_identifier_length
name | max_index_keys
name | max_locks_per_transaction
name | max_pred_locks_per_transaction
name | max_prepared_transactions
name | max_replication_slots
name | max_stack_depth
name | max_standby_archive_delay
name | max_standby_streaming_delay
name | max_wal_senders
name | max_worker_processes
name | password_encryption
name | port
name | post_auth_delay
name | pre_auth_delay
name | quote_all_identifiers
name | random_page_cost
name | restart_after_crash
name | search_path
name | segment_size
name | seq_page_cost
name | server_encoding
name | server_version
name | server_version_num
name | session_preload_libraries
name | session_replication_role
name | shared_buffers
name | shared_preload_libraries
name | sql_inheritance
name | ssl
name | ssl_ca_file
name | ssl_cert_file
name | ssl_ciphers
name | ssl_crl_file
name | ssl_ecdh_curve
name | ssl_key_file
name | ssl_prefer_server_ciphers
name | ssl_renegotiation_limit
name | standard_conforming_strings
name | statement_timeout
name | stats_temp_directory
name | superuser_reserved_connections
name | synchronize_seqscans
name | synchronous_commit
name | synchronous_standby_names
name | syslog_facility
name | syslog_ident
name | tcp_keepalives_count
name | tcp_keepalives_idle
name | tcp_keepalives_interval
name | temp_buffers
name | temp_file_limit
name | temp_tablespaces
name | TimeZone
name | timezone_abbreviations
name | trace_notify
name | trace_recovery_messages
name | trace_sort
name | track_activities
name | track_activity_query_size
name | track_counts
name | track_functions
name | track_io_timing
name | transaction_deferrable
name | transaction_isolation
name | transaction_read_only
name | transform_null_equals
name | unix_socket_directories
name | unix_socket_group
name | unix_socket_permissions
name | update_process_title
name | vacuum_cost_delay
name | vacuum_cost_limit
name | vacuum_cost_page_dirty
name | vacuum_cost_page_hit
name | vacuum_cost_page_miss
name | vacuum_defer_cleanup_age
name | vacuum_freeze_min_age
name | vacuum_freeze_table_age
name | vacuum_multixact_freeze_min_age
name | vacuum_multixact_freeze_table_age
name | wal_block_size
name | wal_buffers
name | wal_keep_segments
name | wal_level
name | wal_log_hints
name | wal_receiver_status_interval
name | wal_receiver_timeout
name | wal_segment_size
name | wal_sender_timeout
name | wal_sync_method
name | wal_writer_delay
name | work_mem
name | xmlbinary
name | xmloption
name | zero_damaged_pages


PostgreSQL 9.4 – System Tables – Get database/table counts, sizes, and other general info for your database

Get information about your existing databases

select * from pg_database;

Show uptime of Postgres server (Not uptime of OS)

select date_trunc ('second', current_timestamp - pg_postmaster_start_time());

View how much space a database consumes

select pg_size_pretty(pg_database_size(current_database()));

View the largest tables in your database

\x
select relname, relpages
from pg_class 
order by relpages desc;
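
A variant of the same idea that reports human-readable sizes (the LIMIT is arbitrary):

select relname, pg_size_pretty(pg_relation_size(oid))
from pg_class
order by pg_relation_size(oid) desc
limit 10;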

View how large a table is

select pg_size_pretty(pg_relation_size('name_of_table'));

or

\dt+ name_of_table

View how many tables a user owns

select  count(*)  from pg_tables where tableowner = 'seeker';

View how many tables your database has (excluding the system schemas).

select count(*) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema');

View how many databases a server has.

select count(*)  from pg_database;

View size of all databases

select sum(pg_database_size(datname)) from pg_database;

List roles enabled for the current user

select * from information_schema.enabled_roles;

List all schemas

select schema_name from information_schema.schemata;

Python/PostgreSQL – Query a database and present the data in a plot

How to query a database using a Python script and present the data on a plot.

import matplotlib.pyplot as plt
import pandas as pd
import psycopg2


conn = psycopg2.connect("host='172.31.98.161' dbname='servers' user='seeker'")
cur = conn.cursor() #Create the cursor
cur.execute("select total_ram_raw, used_ram_raw, time_captured  from server_perf  where hostname='localhost.localdomain'")
rows = cur.fetchall()
conn.close()

df = pd.DataFrame(rows, columns=['TotalRam', 'UsedRAM', 'Time'])
df.TotalRam = df.TotalRam.astype(float)
df.UsedRAM  = df.UsedRAM.astype(float)

# Write the data out, then read it back with the timestamp as the index
df.to_csv("file.csv")
df = pd.read_csv('file.csv', index_col='Time')
df[['TotalRam', 'UsedRAM']].plot()
plt.show()

Python/PostgreSQL 9.4 – Server Performance Data Capture V.2

In this second version, the scripts that capture data from the servers use classes. In addition, the script that captures dynamic data from the remote servers captures additional data. Specifically, it captures RAM and HD usage in raw numbers, not only in “humanized” format. The humanized fields were not removed. As a result, the appropriate tables had to be modified, which means the schema has changed as well. Also, the database and the scripts are now installed via Puppet. The Puppet manifests are far from polished; they need more work, but they do work.

These scripts gather static and dynamic information from servers and insert that data into a PostgreSQL database. The static information is information that never changes unless a major upgrade takes place. The dynamic data is performance data of the servers. The purpose of the static data is to make it possible to join against the dynamic information, which is inserted into the database every X minutes via cron; see the sample query below.
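
A sample of that kind of query, joining the static servers table against the dynamic server_perf table (columns taken from the schema shown later in this post):

select s.hostname, s.os, p.cpu_use_perc, p.used_ram_perc, p.time_captured
from servers s
join server_perf p on p.hostname = s.hostname
order by p.time_captured desc;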

These scripts work only on Linux/Unix based machines.

The Puppet modules used to install the database and the scripts are located here.

The static information for the remote servers is as follows:

hostname
iface
ipaddress
OS
OSRel
OSKern
total_M
brand
Hz
cores
arch

The dynamic information for the remote servers is as follows:

hostname
iface
ipaddress
total_ram_hum
used_ram_hum
total_ram_raw
used_ram_raw
used_ram_perc
total_HD_hum
used_HD_hum
total_HD_raw
used_HD_raw
used_HD_perc
cpu_use_perc
swap_used_hum
swap_total_hum
swap_perc
swap_used_raw
swap_total_raw

The static.py script needs to run only once on the remote servers, or whenever a major upgrade changes the configuration regarding RAM, partitions, IP, operating system (even an OS upgrade), CPU, or a NIC replacement.

The dynamic.py script runs on the remote servers via cron. It captures the information that constantly changes, such as memory, storage, and swap usage, and sends it to the remote PostgreSQL database for insertion.

In both scripts, the data is entered into a dictionary and then a connection to the database is created in order to insert the data.

The Static.py script

#!/usr/bin/python
import psutil
import os
import math
import platform
import subprocess
import socket
import psycopg2
import netifaces as ni
import humanize
from cpuinfo import cpuinfo


class Static():
    def __init__(self):
        #NIC INFO
        self.hostname   = socket.gethostname()
        self.iface      = ni.interfaces()[1]
        self.ipaddress  = ni.ifaddresses(self.iface)[2][0]['addr']
        #---OS INFO
        #For Linux (RH-Debian) Operating Systems
        self.distro  = platform.dist()[0]
        self.release = platform.dist()[1]
        self.kernel  = platform.release()
        #For Windows Operating Systems
        self.osinfo_2_os    = platform.uname()[0]
        self.osinfo_2_ver   = platform.uname()[2]
        self.osinfo_2_rel   = platform.uname()[3]
        #----RAM INFO
        raw_totalM = psutil.virtual_memory().total
        self.total_M    = humanize.naturalsize(raw_totalM)
        #----CPU INFO
        self.info       = cpuinfo.get_cpu_info()
        self.brand      = self.info['brand']
        self.Hz         = self.info['hz_advertised']
        self.cores        = self.info['count']
        self.arch       = self.info['bits']

    def get_OS_make(self):
       if platform.system()  =="Linux":
           return self.distro, self.release, self.kernel
       elif platform.system()     =="Windows":
           return self.osinfo_2_os, self.osinfo_2_ver, self.osinfo_2_rel

info = Static()



hostname  = info.hostname
iface     = info.iface
ipaddress = info.ipaddress
OS        = info.get_OS_make()[0]
OSRel     = info.get_OS_make()[1]
OSKern    = info.get_OS_make()[2]
total_M   = info.total_M
brand     = info.brand
Hz        = info.Hz
cores     = info.cores
arch      = info.arch




#Create the Database PostgreSQL 9.4 connection.
conn = psycopg2.connect("host='172.31.98.161' dbname='servers' user='seeker'")
cur = conn.cursor() #Create the cursor
#Create a dictionary holding the value of each field.
server_info = {'hostname': hostname, 'iface': iface, 'ipaddress': ipaddress, 'OS': OS, 'OSRel': OSRel, 'OSKern': OSKern, 'total_M': total_M, 'brand': brand, 'Hz': Hz, 'cores': cores, 'arch': arch}
#Use a parameterized query so psycopg2 quotes and escapes the values itself.
cur.execute("INSERT INTO servers(hostname, iface, ipaddress, OS, OSRel, OSKern, total_M, brand, Hz, cores, arch) VALUES (%(hostname)s, %(iface)s, %(ipaddress)s, %(OS)s, %(OSRel)s, %(OSKern)s, %(total_M)s, %(brand)s, %(Hz)s, %(cores)s, %(arch)s)", server_info)
#If this is not present the changes will not get committed.
conn.commit()

The Dynamic.py script

#!/usr/bin/python
#THIS FILE IS MANAGED BY PUPPET
import psutil
import os
import math
import platform
import subprocess
import socket
import psycopg2
import netifaces as ni
import humanize
from cpuinfo import cpuinfo

class Dynamic():
    def __init__(self):
        #NIC INFO
        self.hostname   = socket.gethostname()
        self.iface      = ni.interfaces()[1]
        self.ipaddress  = ni.ifaddresses(self.iface)[2][0]['addr']
        #RAM USAGE INFO-------------------------------------------------------------------
        self.total_ram_hum       = humanize.naturalsize(psutil.virtual_memory().total)
        self.used_ram_hum        = humanize.naturalsize(psutil.virtual_memory().used)
        #---------Raw info
        self.total_ram_raw       = (psutil.virtual_memory().total)
        self.used_ram_raw        = (psutil.virtual_memory().used)
        self.used_ram_perc       = psutil.virtual_memory().percent
        #HD USAGE INFO-------------------------------------------------------------------
        self.total_HD_hum        = humanize.naturalsize(psutil.disk_usage('/').total)
        self.used_HD_hum         = humanize.naturalsize(psutil.disk_usage('/').used)
        #---------Raw info
        self.total_HD_raw        =(psutil.disk_usage('/').total)
        self.used_HD_raw         =(psutil.disk_usage('/').used)
        self.used_HD_perc        = psutil.disk_usage('/').percent
        #CPU USAGE INFO-------------------------------------------------------------------
        self.cpu_use_perc        = psutil.cpu_percent()
        #SWAP USAGE INFO-------------------------------------------------------------------
        self.swap_used_hum           = humanize.naturalsize(psutil.swap_memory().used)
        self.swap_total_hum          = humanize.naturalsize(psutil.swap_memory().total)
        self.swap_perc               = psutil.swap_memory()[3]
        #---------Raw info
        self.swap_used_raw           = (psutil.swap_memory().used)
        self.swap_total_raw          = (psutil.swap_memory().total)
    def export_to_csv(self):
        print self.hostname
info = Dynamic()

hostname            = info.hostname
iface               = info.iface
ipaddress           = info.ipaddress
total_ram_hum       = info.total_ram_hum
used_ram_hum        = info.used_ram_hum
total_ram_raw       = info.total_ram_raw
used_ram_raw        = info.used_ram_raw
used_ram_perc       = info.used_ram_perc
total_HD_hum        = info.total_HD_hum
used_HD_hum         = info.used_HD_hum
total_HD_raw        = info.total_HD_raw
used_HD_raw         = info.used_HD_raw
used_HD_perc        = info.used_HD_perc
cpu_use_perc        = info.cpu_use_perc
swap_used_hum       = info.swap_used_hum
swap_total_hum      = info.swap_total_hum
swap_perc           = info.swap_perc
swap_used_raw       = info.swap_used_raw
swap_total_raw      = info.swap_total_raw


conn = psycopg2.connect("host='172.31.98.161' dbname='servers' user='seeker'")
cur = conn.cursor() #Create the cursor
#Create a dictionary holding the value of each field.
server_info = {'hostname': hostname, 'iface': iface, 'ipaddress': ipaddress, 'total_ram_hum': total_ram_hum, 'used_ram_hum': used_ram_hum, 'total_ram_raw': total_ram_raw, 'used_ram_raw': used_ram_raw, 'used_ram_perc': used_ram_perc, 'total_HD_hum': total_HD_hum, 'used_HD_hum': used_HD_hum, 'total_HD_raw': total_HD_raw, 'used_HD_raw': used_HD_raw, 'used_HD_perc': used_HD_perc, 'cpu_use_perc': cpu_use_perc, 'swap_used_hum': swap_used_hum, 'swap_total_hum': swap_total_hum, 'swap_perc': swap_perc, 'swap_used_raw': swap_used_raw, 'swap_total_raw': swap_total_raw}
#Use a parameterized query so psycopg2 quotes and escapes the values itself.
cur.execute("INSERT INTO SERVER_PERF(hostname, iface, ipaddress, total_ram_hum, used_ram_hum, total_ram_raw, used_ram_raw, used_ram_perc, total_HD_hum, used_HD_hum, total_HD_raw, used_HD_raw, used_HD_perc, cpu_use_perc, swap_used_hum, swap_total_hum, swap_perc, swap_used_raw, swap_total_raw) VALUES (%(hostname)s, %(iface)s, %(ipaddress)s, %(total_ram_hum)s, %(used_ram_hum)s, %(total_ram_raw)s, %(used_ram_raw)s, %(used_ram_perc)s, %(total_HD_hum)s, %(used_HD_hum)s, %(total_HD_raw)s, %(used_HD_raw)s, %(used_HD_perc)s, %(cpu_use_perc)s, %(swap_used_hum)s, %(swap_total_hum)s, %(swap_perc)s, %(swap_used_raw)s, %(swap_total_raw)s)", server_info)
#If this is not present the changes will not get committed.
conn.commit()

The new Database schema is as follows:

pg_dump -U seeker -d servers -s -h 172.31.98.161 > servers_db_schema

CREATE DATABASE SERVERS;
CREATE ROLE seeker WITH PASSWORD 'Password!';
ALTER DATABASE SERVERS OWNER TO seeker;
ALTER ROLE seeker WITH LOGIN;
GRANT ALL PRIVILEGES ON DATABASE SERVERS to seeker;

--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner:
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;

--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';

SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: server_perf; Type: TABLE; Schema: public; Owner: seeker; Tablespace:
--

CREATE TABLE server_perf (
hostname text NOT NULL,
iface text,
ipaddress inet NOT NULL,
total_ram_hum text,
used_ram_hum text,
total_ram_raw numeric(30,2),
used_ram_raw numeric(30,2),
used_ram_perc text,
total_hd_hum text,
used_hd_hum text,
total_hd_raw numeric(30,2),
used_hd_raw numeric(30,2),
used_hd_perc text,
cpu_use_perc text,
swap_used_hum text,
swap_total_hum text,
swap_perc text,
swap_used_raw numeric(30,2),
swap_total_raw numeric(30,2),
time_captured timestamp without time zone DEFAULT now()
);

ALTER TABLE server_perf OWNER TO seeker;

--
-- Name: servers; Type: TABLE; Schema: public; Owner: seeker; Tablespace:
--

CREATE TABLE servers (
hostname text NOT NULL,
iface text,
ipaddress inet NOT NULL,
os text,
osrel text,
oskern text,
total_m text,
brand text,
hz text,
cores numeric(4,1),
arch text
);

ALTER TABLE servers OWNER TO seeker;

--
-- Name: pk_hostname; Type: CONSTRAINT; Schema: public; Owner: seeker; Tablespace:
--

ALTER TABLE ONLY servers
ADD CONSTRAINT pk_hostname PRIMARY KEY (hostname);

--
-- Name: server_perf_hostname_fkey; Type: FK CONSTRAINT; Schema: public; Owner: seeker
--

ALTER TABLE ONLY server_perf
ADD CONSTRAINT server_perf_hostname_fkey FOREIGN KEY (hostname) REFERENCES servers(hostname);

--
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;

--
-- PostgreSQL database dump complete
--

CentOS 7 PostgreSQL 9.4 – Install the database, use a non-default PGDATA location, and auto-start on boot (systemd)

These instructions show how to install PostgreSQL 9.4 on CentOS 7. They also show how to run your database engine from a non-default location, by altering the PGDATA variable, which dictates where the database engine runs from. It should be noted that when PostgreSQL is installed, a postgres user is also created. This account's environment deviates from the other accounts; you can cat the /etc/passwd file to see the default location of its home directory. As a result, in order to change the PGDATA environment variable you will have to alter the .bash_profile in that home directory for the change to take effect.

 

Step 1) Install the yum repo for PostgreSQL 9.4

yum -y localinstall http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm

Step 2) Download the RPMS and install the database engine

yum -y install   postgresql94-contrib  postgresql94-server

Step 3) Create the directory the database will run from. In this case it will be /home/postgres.

mkdir /home/postgres; chown -R postgres:postgres /home/postgres

Step 4) Initialize the database. 

cd /usr/pgsql-9.4/bin/; sudo -u postgres ./initdb -D /home/postgres

Step 5) Alter the default location your database will run from.

vim /var/lib/pgsql/.bash_profile
#Alter the PGDATA as follows. See sample below
#PGDATA=/var/lib/pgsql/9.4/data
PGDATA=/home/postgres
export PGDATA

Step 6) Configure systemd to start the database on boot-up

vi /usr/lib/systemd/system/postgresql-9.4.service
# Note: avoid inserting whitespace in these Environment= lines, or you may
# break postgresql-setup.
# Location of database directory
#Environment=PGDATA=/var/lib/pgsql/9.4/data/
Environment=PGDATA=/home/postgres/

systemctl daemon-reload
systemctl enable postgresql-9.4.service
systemctl start postgresql-9.4.service