Cleaning PostgreSQL logs on Linux

1C and Linux

I write this for myself so I don't forget how I did things. About 95% of it works. I reply to comments when I see them.

Monday, August 20, 2018

Backing up and restoring a 1C database in PostgreSQL, database maintenance, cleaning PostgreSQL logs

A situation can occur where a .dt file exports fine with the 1C tools,
but refuses to load back (or to load into a file-mode infobase) because of errors in the database.
A .dt file can be considered a backup only after a test restore has been verified!
That is why you should also set up backups with the database's own tools.
/home/user/test/ is a folder with FTP access (test/test), prepared earlier.

Preparation:
$ sudo mkdir /home/user/test/backup
$ sudo chmod -R 777 /home/user/test/backup
$ sudo apt-get -y install pigz

Backup with compression:
$ sudo su - postgres
$ pg_dump demo | pigz > /home/user/test/backup/demo.sql.gz

Decompress while keeping the archive:
$ sudo su - postgres
$ unpigz -c /home/user/test/backup/demo.sql.gz > /home/user/test/backup/demo.sql

Create the demotest database (if it is not created yet):
$ sudo su - postgres
$ createdb --username postgres -T template0 demotest

Restore into the demotest database:
$ psql -l
$ unpigz -c /home/user/test/backup/demo.sql.gz > /home/user/test/backup/demo.sql
$ psql demotest < /home/user/test/backup/demo.sql > /home/user/test/backup/backup.log
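The lines below are the body of the backup script /home/user/test/backup/backup-sql.sh that is run later as a test. Presumably it was created like this, with a header that defines the DATA date stamp used in the archive name (a sketch mirroring the root variant further down):
$ nano /home/user/test/backup/backup-sql.sh

#!/bin/sh
# Set the date stamp used in the archive name (assumed, mirrors the root variant below)
DATA=`date +"%Y-%m-%d_%H-%M"`
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
echo "`date +"%Y-%m-%d_%H-%M-%S"` Size database file: " >> /home/user/test/backup/backup.log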
du -h -s /var/lib/postgresql/9.6/main/base >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
# Write a log entry with a timestamp down to the second
echo "`date +"%Y-%m-%d_%H-%M-%S"` Start backup demo" >> /home/user/test/backup/backup.log

# Dump the demo database and compress it on the fly
/usr/bin/pg_dump demo | pigz > /home/user/test/backup/$DATA-demo.sql.gz

echo "`date +"%Y-%m-%d_%H-%M-%S"` End backup demo" >> /home/user/test/backup/backup.log

# Write a log entry with a timestamp down to the second
echo "`date +"%Y-%m-%d_%H-%M-%S"` Start vacuumdb demo" >> /home/user/test/backup/backup.log

vacuumdb --verbose --analyze --full --quiet --dbname=demo

echo "`date +"%Y-%m-%d_%H-%M-%S"` End vacuumdb demo" >> /home/user/test/backup/backup.log

echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
echo "`date +"%Y-%m-%d_%H-%M-%S"` Size database file: " >> /home/user/test/backup/backup.log
du -h -s /var/lib/postgresql/9.6/main/base >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log

Test run:
#$ cd /home/user/test/backup/
$ sudo su - postgres
$ sh /home/user/test/backup/backup-sql.sh

$ sudo su - postgres
$ crontab -e
Add to the end (it will run at 2:01 am):
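Presumably a line like this (2:01 am, same script as in the test run above):
1 2 * * * sh /home/user/test/backup/backup-sql.sh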

View the scheduled jobs:
$ crontab -l

Regularly restore and verify your backups with the 1C tools!

View the log:
$ sudo su - postgres
$ nano /home/user/test/backup/backup.log

If we want to run the same script as root,
we rework it, first detaching it from the postgres user's crontab.

$ sudo su - postgres
$ crontab -e
Comment out the backup entry:
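That is, prefix the entry added earlier with # (assuming the line shown above):
#1 2 * * * sh /home/user/test/backup/backup-sql.sh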
$ exit

Now replace the script with the following:
$ nano /home/user/test/backup/backup-sql.sh

#!/bin/sh
set -e
# Stop the 1C server
systemctl stop srv1cv83.service
systemctl status srv1cv83.service >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
# Set the date stamp
DATA=`date +"%Y-%m-%d_%H-%M"`
echo "`date +"%Y-%m-%d_%H-%M-%S"` Size database file: " >> /home/user/test/backup/backup.log
du -h -s /var/lib/postgresql/9.6/main/base >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
# Write a log entry with a timestamp down to the second
echo "`date +"%Y-%m-%d_%H-%M-%S"` Start backup demo" >> /home/user/test/backup/backup.log

# Dump the demo database and compress it on the fly
cd /home/user/test/backup/
/bin/su postgres -c "/usr/bin/pg_dump demo | pigz > /home/user/test/backup/$DATA-demo.sql.gz"
echo "`date +"%Y-%m-%d_%H-%M-%S"` End backup demo" >> /home/user/test/backup/backup.log
sleep 2
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
# Write a log entry with a timestamp down to the second
echo "`date +"%Y-%m-%d_%H-%M-%S"` Start vacuumdb demo" >> /home/user/test/backup/backup.log
/bin/su postgres -c "/usr/bin/vacuumdb --verbose --analyze --full --quiet --username postgres --dbname=demo"
echo "`date +"%Y-%m-%d_%H-%M-%S"` End vacuumdb demo" >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
echo "`date +"%Y-%m-%d_%H-%M-%S"` Size database file: " >> /home/user/test/backup/backup.log
du -h -s /var/lib/postgresql/9.6/main/base >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log
# Start the 1C server again
systemctl start srv1cv83.service
systemctl status srv1cv83.service >> /home/user/test/backup/backup.log
echo "-----------------------------------------------" >> /home/user/test/backup/backup.log

$ sudo -i
# crontab -e
Add to the end (it will run at 2:01 am):
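Presumably the same line as before, now in root's crontab:
1 2 * * * sh /home/user/test/backup/backup-sql.sh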


Safely deleting (cleaning) mysql-bin logs in MySQL or MariaDB

The mysql-bin.xxxxxx files are binary logs containing every query sent to the database. They are needed for data replication and for restoring data when necessary.

Since these files can take up a lot of disk space over time, you need to clean them up or, better yet, set up automatic deletion.

This guide is not tied to any particular system: it works for Windows as well as for Linux, for example Ubuntu or CentOS. Since MariaDB is a fork of MySQL, the guide applies to it as well.

Before cleaning the logs, make sure the DBMS has no data replication configured with other servers. If replication is in use, verify that all data has been copied correctly. Deleting a log that has not yet been synchronized will force you to rebuild the replication.

Manual log cleanup

The queries are run from the MySQL command shell.

To delete binary logs up to a specific bin file:

> PURGE BINARY LOGS TO 'mysql-bin.000145';

* where mysql-bin.000145 is the name of the log file; all logs preceding it are removed.

To delete logs older than a specific date:

> PURGE BINARY LOGS BEFORE '2017-05-07 00:00:00';

* deletes the logs created before May 7, 2017.

> PURGE BINARY LOGS BEFORE DATE(NOW() - INTERVAL 90 DAY) + INTERVAL 0 SECOND;

* deletes everything except the logs from the last 90 days.

Automatic deletion

Open the DBMS configuration file:
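On current Ubuntu/Debian installs of MySQL the file is usually:
$ nano /etc/mysql/mysql.conf.d/mysqld.cnf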

or, in earlier versions:
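typically /etc/my.cnf or /etc/mysql/my.cnf (check your distribution):
$ nano /etc/my.cnf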

and add the following to the [mysqld] section:
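A minimal example for a 90-day retention; expire_logs_days is the classic option for MySQL 5.x and MariaDB, while MySQL 8.0 uses binlog_expire_logs_seconds instead:
expire_logs_days = 90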

* in this example we set the log retention period to 90 days.

The settings take effect after the database server is restarted:

systemctl restart mysql || systemctl restart mariadb

Logs older than the configured age (90 days in our example) will also be purged at that point.



How To Start Logging With PostgreSQL

By Juraj Holub
Last Validated on May 18, 2021 · Originally Published on May 18, 2021

Introduction

This tutorial shows you how to configure and view different PostgreSQL logs. PostgreSQL is an open-source relational database based on SQL (Structured Query Language). PostgreSQL offers a dedicated logging daemon called the logging collector. A database underpins almost every backend, so administrators usually want this service to be logged.

In this tutorial, you will do the following:

  • You will install the PostgreSQL server and view syslog records related to this service. Next, you will view the database custom log.
  • You will connect to the PostgreSQL server and view metadata about the logging collector. You will enable this daemon.
  • You will understand the most important PostgreSQL logging configuration settings. You will view, edit and reload the server configuration.
  • You will simulate a slow query and find the corresponding record in the new log.

Prerequisites

  • Ubuntu 20.04 distribution including the non-root user with sudo access.
  • Basic knowledge of SQL (understanding a simple SELECT statement).
  • Understanding of systemd and systemctl. All basics are covered in our How to Control Systemd with Systemctl tutorial.
  • You should know the principle of log rotation. You can consult our How to Control Journald with Journalctl tutorial.

Step 1 — Installing Server and Viewing Syslog Records

The PostgreSQL server is managed through the command-line program psql , which opens an interactive terminal for accessing the database. The process of starting, running, or stopping the PostgreSQL server is logged to syslog. These syslog records don't include any information about SQL queries; they are useful for analyzing the server itself.

First of all, let’s install the PostgreSQL server. Ubuntu 20.04 allows to install the PostgreSQL from default packages with the apt install (installation requires sudo privilege):

The first command updates the Ubuntu package lists, and the second downloads and installs the packages required for the PostgreSQL server.

Now, the server is installed and started. The process of starting the server is recorded in syslog. You can view all syslog records related to PostgreSQL with journalctl :
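A typical invocation (assuming the service unit is named postgresql ):
$ sudo journalctl -u postgresql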

The option -u restricts the output to syslog records related to the postgresql service. You'll see the program's output appear on the screen:

The output shows the records about the first server start.

Step 2 — Viewing the Custom Server Log

Besides the syslog records, PostgreSQL maintains its own log. This log includes much more detailed information than the general syslog records, and it is highly configurable. The log is stored in the default log directory for Linux systems ( /var/log ).

If you installed the PostgreSQL server, you can list the directory /var/log and find a new subdirectory postgresql with ls :
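For example:
$ ls /var/log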

You’ll see the program’s output appear on the screen:

The output also shows the directory postgresql . By default this directory contains a single log, postgresql-12-main.log . Let's view the content of this file with cat :
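For example ( sudo may be needed depending on your group membership):
$ sudo cat /var/log/postgresql/postgresql-12-main.log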

You’ll see the program’s output appear on the screen:

The output shows that the file stores plain-text records about the initialization and operation of the PostgreSQL server. You can see that these records are much more detailed than the syslog records.

Step 3 — Connecting to Server and Checking Log Collector

By default, the PostgreSQL logs are maintained by the syslog daemon. However, the database includes a dedicated logging collector (a daemon independent of syslog) that offers more advanced log configuration specialized for the database.

First of all, let’s connect to the PostgreSQL server and check the logging configuration. You can connect to the PostgreSQL server as a user postgres (this user account is created by default within installation):

The command requires sudo because you are changing the user role. You will be redirected to the PostgreSQL interactive terminal. Now, you can view the system variables related to the logging configuration.

You can view the status of the log collector by executing the command show :
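At the psql prompt:
postgres=# SHOW logging_collector;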

The command show displays the value of the system variable logging_collector . You will see the following output:
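On a default installation it reports off :
 logging_collector
-------------------
 off
(1 row)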

As you can see, the PostgreSQL log collector is by default disabled. We will enable it in the next step. Now, let’s disconnect from the server by executing the exit command:
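In psql this is simply:
postgres=# exit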

You will be redirected back to the terminal.

Step 4 — Enabling the PostgreSQL Log Collector

The PostgreSQL server includes various system variables that specify the configuration of logging. All these variables are stored in the configuration file postgresql.conf . This file is by default stored in the directory /etc/postgresql/12/main/ . The following list explains the meaning of the most important server log variables:

  • logging_collector : We already know this variable from the previous step. However, for completeness, it is included in this list because it is one of the most important log configuration settings.
  • log_destination : Sets the destination for server log output.
  • log_directory : It determines the directory in which log files will be created.
  • log_filename : It sets the file names of the created log files.
  • log_line_prefix : Each log record includes, besides the message itself, a header prefix with important metadata (for example, timestamp, user, process id, and others). You can specify the header fields in this variable.
  • log_hostname : If this variable is disabled, the log records only the IP addresses of clients. If it is enabled, the log maps each IP address to a hostname. Keep in mind that DNS resolution costs resources.
  • log_timezone : The variable holds a geographical location; timestamps are converted into the corresponding local time.
  • log_connections : If you enable this variable, the log records all authorized connections and connection attempts to the server. This is useful for security auditing, but it can also put a heavy load on the server if you have thousands of clients.
  • log_disconnections : This variable is complementary to the previous one. Enabling it logs all disconnections. Typically, you want to enable only one of these two variables.
  • log_statement : The variable determines which SQL statements will be logged.
  • log_duration : A boolean variable. If it is enabled, all SQL statements are recorded together with their duration. This setting can decrease database performance, but it is useful for finding slow queries.
  • log_min_duration_statement : An extension of the previous setting. It specifies the minimum duration, in milliseconds, an SQL statement must run to be logged.
  • log_rotation_age : An integer value that sets the maximum age of a log file, in minutes, before it is rotated.
  • log_rotation_size : Sets the maximum size of the log file in kilobytes. When the log reaches this size, it is rotated.

Each of these variables can be viewed through the psql terminal. If you want to view them, you can follow the previous step, where we already viewed the variable logging_collector . For further information about the configuration variables, see the official documentation.

Enabling Log Collector

You can enable the log collector daemon by editing postgresql.conf ( sudo required):
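For example, with nano (any editor will do):
$ sudo nano /etc/postgresql/12/main/postgresql.conf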

The file contains the following lines that hold configuration variables logging_collector and log_destination (by default commented out):
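On a stock PostgreSQL 12 install the two lines typically look like this (trailing comments omitted):
#log_destination = 'stderr'
#logging_collector = off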

Uncomment both variables, set logging_collector to on and log_destination to stderr :
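After editing, the two lines should read:
logging_collector = on
log_destination = 'stderr'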

Now, you can save the file. You set the log destination to stderr because the log collector reads its input from there. The configuration is now changed, but the log daemon is not active yet. If you want to apply the new configuration immediately, you must restart the PostgreSQL server with systemctl ( sudo required):
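On Ubuntu the service unit is postgresql , so:
$ sudo systemctl restart postgresql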

Now the PostgreSQL server reloads the configuration and starts the log collector. Whenever you change a variable in postgresql.conf and want the change applied immediately, you must restart the service.

Step 5 — Configuring Log Collector

Now, you will set up the variables described in the previous step. Keep in mind that each organisation has unique logging requirements. This tutorial shows you a possible setup, but you should configure values that match your use case. All these variables are stored in the file /etc/postgresql/12/main/postgresql.conf . If you want to change any of these variables, edit this file and restart the PostgreSQL server as we did in the previous step.

Configuring Log Name, Directory and Rotation

The naming of logs becomes important if you manage logs from multiple services and servers. The log files created by the log collector are named according to the pattern held in the variable log_filename . The name can include a constant string as well as a formatted timestamp. The default log name is postgresql-%Y-%m-%d_%H%M%S.log . The pattern %Y-%m-%d_%H%M%S determines the formatted timestamp:

  • %Y : The year as a decimal number including the century.
  • %m : The month as a decimal number (range 01 to 12).
  • %d : The day of the month as a decimal number (range 01 to 31).
  • %H : The hour as a decimal number using a 24-hour clock (range 00 to 23).
  • %M : The minute as a decimal number (range 00 to 59).
  • %S : The second as a decimal number (range 00 to 60).

The created file could be named, for example, postgresql-2021-01-01_235959.log .

The file-system directory of the log is determined by the variable log_directory . You should keep in mind that Linux typically stores all logs into the /var/log/ directory.

The log collector allows configuring log rotation. It follows the same principle as logrotate for syslog, but the rotation is handled by the PostgreSQL logging collector instead of syslog. If you do not know what log rotation is, you can read How to Manage Logs with Logrotate on Ubuntu 20.04. The log rotation is configured by the following two values in postgresql.conf :

  • log_rotation_age : If the value is set to 0 then age-based log rotation is disabled. The default value is 1 day, but this value depends on your use case. An integer given without units is interpreted as minutes.
  • log_rotation_size : If the value is set to 0 then size-based log rotation is disabled; otherwise the log file is rotated automatically once it reaches the specified number of kilobytes.

You can view all these variables in the postgresql.conf :
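For example, with a pager such as less :
$ less /etc/postgresql/12/main/postgresql.conf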

The file contains the following lines that hold described configuration variables (by default commented out):
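On a stock PostgreSQL 12 install these lines typically read as follows (trailing comments omitted):
#log_directory = 'log'
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
#log_rotation_age = 1d
#log_rotation_size = 10MB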

Now, you can close the file. You can potentially edit these values, but in such a case you need sudo access.

Configuring Log Structure

You can configure the structure of each log record through various configuration variables. First, let's set up the record header (information prefixed to each log line). The prefix structure is determined by the variable log_line_prefix , which holds a printf-style string. The following list shows the most important escape sequences:

  • %t : Timestamp without milliseconds ( %m is with milliseconds). If you want the timestamp in a specific local time, set the variable log_timezone to the chosen geographical location, for example America/New_York , Europe/Paris , or any other name from the IANA timezone database.
  • %p : Process ID.
  • %q : Produces no output itself; in a non-session process the prefix stops at this point.
  • %d : Name of the database.
  • %u : User name.
  • %h : Remote hostname or IP address. By default, the IP address is recorded. You can enable DNS translation to hostnames by setting the variable log_hostname to on . However, this can impose a non-negligible performance penalty.
  • %a : Application name.
  • %l : Number of the log line for each session or process, starting at 1.

The log_line_prefix with the value '%t [%p] %q%d@%u, %h, %a, %l ' will produce, for example, a log record like the following:
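With that prefix, a record might look roughly like this (all values below are purely illustrative):
2021-05-15 11:56:37 UTC [8162] postgres@postgres, ::1, psql, 3 LOG:  duration: 1001.113 ms  statement: select pg_sleep(1);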

Once again, you can view all these variables in the postgresql.conf :

The file contains the following lines that hold described configuration variables:

As you can see, DNS translation to hostnames is disabled by default, the default log line prefix records the timestamp with milliseconds, the process ID, the user and the IP address, and the timezone is set to the geographical location preset by the OS.

Configuring Log Collector to Record Selected SQL Commands

You can configure which type of action will be logged with the log collector. There are two boolean variables that enable logging of the following database actions:

  • Logs each connection attempt and each successful connection to the database. This is disabled by default. You can enable it by setting the variable log_connections to on .
  • Logs the duration of each completed SQL statement. It is disabled by default. You can enable it by setting the variable log_duration to on . If you want to log only slow queries, you can instead set a minimum execution time above which statements are logged: the variable log_min_duration_statement holds this threshold as an integer in milliseconds.

Alongside log_connections there is also a log_disconnections variable that logs successful disconnections from the database. A database usually handles a large number of connections, so you will normally want to enable just one of the two to save resources.

Finally, you can set which SQL statements are logged. This is determined by the variable log_statement , which can hold one of the following four values:

  • none : SQL statement logging is disabled.
  • ddl : The log collector will log all data definition statements ( CREATE , ALTER , and DROP ).
  • mod : Same as the ddl plus data-modifying statements ( UPDATE , INSERT , DELETE and others).
  • all : All SQL statements are recorded.

Once again, you can view all these values in the postgresql.conf :

The file contains the following lines that hold described configuration variables:
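On a stock PostgreSQL 12 install these settings appear commented out with their default values (trailing comments omitted):
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_min_duration_statement = -1
#log_statement = 'none'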

As you can see, by default, all these database actions are not logged.

Step 6 — Viewing Collector Logs

If you set up all described variables in postgresql.conf and restart the server then you can view the content of the new logs.

For demonstration, we will use the following postgresql.conf configuration:
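Judging from the log directory, the file name pattern and the 250 ms threshold mentioned below, the settings were presumably along these lines:
logging_collector = on
log_destination = 'stderr'
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_line_prefix = '%t [%p] %q%d@%u, %h, %a, %l '
log_connections = on
log_min_duration_statement = 250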

Executing SQL Statement

First of all, let’s connect to the PostgreSQL server and execute some SQL statement. You can connect to the PostgreSQL server as a user postgres ( sudo required):

You will be redirected to the PostgreSQL interactive terminal. Let's execute an SQL statement that will be logged:
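The statement used below is:
postgres=# SELECT pg_sleep(1);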

The SELECT calls the function pg_sleep() , which sleeps for 1 second (our configuration logs every statement that runs longer than 250 ms).

Now, let’s disconnect from the server by executing the exit command:

You will be redirected back to the terminal.

Viewing Record of Executed SQL Statement in the Log

Now, let’s view the new collector log that holds a record of SQL statement executions. Our configuration specifies log directory to /var/log/postgresql . Let’s list the content of this directory with ls :

You’ll see the program’s output appear on the screen:

The output shows, alongside the default log file postgresql-12-main.log , a new log postgresql-2021-05-15_115637.log . You can verify that the log name matches the pattern configured in the variable log_filename .

Let’s view the content of this log with a cat (the sudo is required because this file is maintained by the system):

You’ll see the program’s output appear on the screen:

The output shows all records in this log. The first records refer to the startup of the server. You can see that all records follow the format specified in the variable log_line_prefix . The last few records hold information about the connection to the database through psql as the user postgres and the execution of the command select pg_sleep(1) . The records also include the execution time of the SQL statement.

As you can see, the logging collector with this configuration generates a relatively large number of records in a short time. You should find the configuration that best matches your use case.

Conclusion

In this tutorial, you installed the PostgreSQL server. You viewed the syslog records related to this service and the database's own log. You viewed the log collector configuration and learned the meaning of the most important settings in the configuration file. Finally, you enabled, configured and viewed the logging collector.

