Oracle Logs in Linux

Logging Overview

The Oracle Cloud Infrastructure Logging service is a highly scalable and fully managed single pane of glass for all the logs in your tenancy. Logging provides access to logs from Oracle Cloud Infrastructure resources. These logs include critical diagnostic information that describes how resources are performing and being accessed.

How Logging Works

Use Logging to enable, manage, and search logs. The three kinds of logs are the following:

  • Audit logs: Logs related to events emitted by the Oracle Cloud Infrastructure Audit service. These logs are available from the Logging Audit page, or are searchable on the Search page alongside the rest of your logs.
  • Service logs: Emitted by OCI native services, such as API Gateway, Events, Functions, Load Balancing, Object Storage, and VCN Flow Logs. Each of these supported services has pre-defined logging categories that you can enable or disable on your respective resources.
  • Custom logs: Logs that contain diagnostic information from custom applications, other cloud providers, or an on-premises environment. Custom logs can be ingested through the API or by configuring the Unified Monitoring Agent. You can configure an OCI compute instance or resource to upload custom logs directly through the Unified Monitoring Agent. Custom logs are supported in both virtual machine and bare metal scenarios.

A log is a first-class Oracle Cloud Infrastructure resource that stores and captures log events collected in a given context. For example, if you enable Flow Logs on a subnet, it has its own dedicated log. Each log has an OCID and is stored in a log group. A log group is a collection of logs stored in a compartment. Logs and log groups are searchable, actionable, and transportable.

To get started, enable a log for a resource. Services provide log categories for the different types of logs available for resources. For example, the Object Storage service supports the following log categories for storage buckets: read and write access events. Read access events capture download events, while write access events capture write events. Each service can have different log categories for resources. The log categories for one service have no relationship to the log categories of another service. As a result, the Functions service uses different log categories than the Object Storage service.

When you enable a log, you must add it to a log group that you create. Log groups are logical containers for logs. Use log groups to organize and streamline management of logs by applying IAM policy or grouping logs for analysis. For more information, see Managing Logs and Log Groups.

Logs are indexed in the system, and searchable through the Console, API, and CLI. You can view and search logs on the Logging Search page. When searching logs, you can correlate across many logs simultaneously. For example, you can view results from multiple logs, multiple log groups, or even an entire compartment with one query. You can filter, aggregate, and visualize your logs. For more information, see Searching Logs.
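As an illustration, a query on the Logging Search page might look like the following sketch (the compartment and log group OCIDs are placeholders, and the pipe-based query syntax is assumed from the Logging search feature):

```
search "ocid1.compartment.oc1..exampleuniqueID/ocid1.loggroup.oc1..exampleuniqueID"
| sort by datetime desc
```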

After you enable a log, log entries begin to appear on the detail page for the log (see Enabling Logging for a Resource for more information). If you need more archiving support, you can use Service Connector Hub (archiving to object storage, write to stream, and so on). For more information on service logs, see Service Log Reference, and Service Connector Hub.

Note

You can view usage report detail for Logging by accessing Cost and Usage Reports.


Logging Workshop

See the OCI Logging Workshop for step-by-step, lab-based instructions on setting up your environment, enabling service logs, creating custom application logs, searching logs, and exporting log content to Object Storage.

Logging APIs

Oracle Cloud Infrastructure Logging has the following APIs available:

Logging Concepts

The following concepts are essential to working with Logging.

Service Logs: Critical diagnostic information from supported Oracle Cloud Infrastructure services. See Supported Services.

Custom Logs: Diagnostic information from custom applications, other cloud providers, or an on-premises environment. To ingest custom logs, call the API directly or configure the unified monitoring agent.

Audit Logs: Read-only logs from the Audit service, provided for you to analyze and search. Audit logs capture information about API calls made to public endpoints throughout your tenancy. These include API calls made by the Console, Command Line Interface (CLI), Software Development Kits (SDK), your own custom clients, or other Oracle Cloud Infrastructure services.

Log Groups: Logical containers for logs. Use log groups to streamline log management, including applying IAM policy or searching sets of logs. You can move a log group from one compartment to another, and all the logs contained in the log group move with it.

Service Log Category: Services provide log categories for the different types of logs available for resources. For example, the Object Storage service supports the following log categories for storage buckets: read and write access events. Read access events capture download events, while write access events capture write events. Each service can have different log categories for resources, and the log categories for one service have no relationship to the log categories of another service.

Service Connector Hub: Moves logging data to other services in Oracle Cloud Infrastructure. For example, use Service Connector Hub to alarm on log data, send log data to databases, and archive log data to Object Storage. For more information, see Service Connector Hub.

Unified Monitoring Agent: The fluentd-based agent that runs on customer machines (OCI instances) to help customers ingest custom logs.

Agent Configuration: A configuration of the Unified Monitoring Agent that specifies how custom logs are ingested.

Log Encryption

Resource Identifiers

Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource Identifiers.

Ways to Access Oracle Cloud Infrastructure

You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see Software Development Kits and Command Line Interface.

To access the Console, you must use a supported browser. To go to the Console sign-in page, open the navigation menu at the top of this page and click Infrastructure Console. You are prompted to enter your cloud tenant, your user name, and your password.

Authentication and Authorization

Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all interfaces (the Console, SDK or CLI, and REST API).

An administrator in your organization needs to set up groups, compartments, and policies that control which users can access which services, which resources, and the type of access. For example, policies control who can create new users, create and manage the cloud network, launch instances, create buckets, download objects, and so on. For more information, see Getting Started with Policies. For specific details about writing policies for each of the different services, see Policy Reference.

If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which compartment or compartments you should be using.


For administrators: Use the following topics to find examples of IAM policy for Logging:


Oracle Logs in Linux

The log files contain messages about the system, kernel, services, and applications. For those files that are controlled by the system logging daemon rsyslogd, the main configuration file is /etc/rsyslog.conf, which contains global directives, module directives, and rules.

Global directives specify configuration options that apply to the rsyslogd daemon. All configuration directives must start with a dollar sign ($) and only one directive can be specified on each line. The following example specifies the maximum size of the rsyslog message queue:
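A directive along the following lines would appear here (the directive name is standard rsyslog legacy syntax; the value shown is illustrative):

```
$MainMsgQueueSize 50000
```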

The available configuration directives are described in the file /usr/share/doc/rsyslog-version-number/rsyslog_conf_global.html, where version-number is the installed rsyslog version.

The design of rsyslog allows its functionality to be dynamically loaded from modules, which provide configuration directives. To load a module, specify the following directive:
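For example, the following directive loads the TCP input module (the module chosen here is illustrative):

```
$ModLoad imtcp
```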

Modules have the following main categories:

Input modules gather messages from various sources. Input module names always start with the im prefix (examples include imfile and imrelp).

Filter modules allow rsyslogd to filter messages according to specified rules. The name of a filter module always starts with the fm prefix.

Library modules provide functionality for other loadable modules. rsyslogd loads library modules automatically when required. You cannot configure the loading of library modules.

Output modules provide the facility to store messages in a database or on other servers in a network, or to encrypt them. Output module names always start with the om prefix (examples include omsnmp and omrelp).

Message modification modules change the content of an rsyslog message.

Parser modules allow rsyslogd to parse the message content of messages that it receives. The name of a parser module always starts with the pm prefix.

String generator modules generate strings based on the content of messages in cooperation with rsyslog's template feature. The name of a string generator module always starts with the sm prefix.

Input modules receive messages, which pass them to one or more parser modules. A parser module creates a representation of a message in memory, possibly modifying the message, and passes the internal representation to output modules, which can also modify the content before outputting the message.

A description of the available modules can be found at http://www.rsyslog.com/doc/rsyslog_conf_modules.html.

An rsyslog rule consists of a filter part, which selects a subset of messages, and an action part, which specifies what to do with the selected messages. To define a rule in the /etc/rsyslog.conf configuration file, specify a filter and an action on a single line, separated by one or more tabs or spaces.

You can configure rsyslog to filter messages according to various properties. The most commonly used filters are:

Expression-based filters, written in the rsyslog scripting language, select messages according to arithmetic, boolean, or string values.

Facility/priority-based filters filter messages based on facility and priority values that take the form facility.priority.

Property-based filters filter messages by properties such as timegenerated or syslogtag .

The following table lists the available facility keywords for facility/priority-based filters:

auth, authpriv: Security, authentication, or authorization messages.

daemon: Messages from system daemons other than crond and rsyslogd.

lpr: Line printer subsystem.

news: Network news subsystem.

syslog: Messages generated internally by rsyslogd.

The following table lists the available priority keywords for facility/priority-based filters, in ascending order of importance:

notice: Normal but significant condition.

alert: Immediate action required.

emerg: System is unstable.

All messages of the specified priority and higher are logged according to the specified action. An asterisk (*) wildcard specifies all facilities or priorities. Separate the names of multiple facilities and priorities on a line with commas (,). Separate multiple filters on one line with semicolons (;). Precede a priority with an exclamation mark (!) to select all messages except those with that priority.


The following are examples of facility/priority-based filters.

Select all kernel messages with any priority.

Select all mail messages with crit or higher priority.

Select all daemon and kern messages with warning or err priority.

Select all cron messages except those with info or debug priority.
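Written as selectors, the four examples above can be sketched as follows (a reconstruction based on the descriptions; the original file targets are omitted, and the exact selectors may have differed):

```
# Select all kernel messages with any priority.
kern.*

# Select all mail messages with crit or higher priority.
mail.crit

# Select all daemon and kern messages with warning or err priority.
daemon,kern.warning

# Select all cron messages except those with info or debug priority.
cron.!info,!debug
```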

By default, /etc/rsyslog.conf includes the following rules:
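On a typical Oracle Linux system, the default rules resemble the following (a sketch; exact paths and selectors vary between releases):

```
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
authpriv.*                                  /var/log/secure
mail.*                                      -/var/log/maillog
cron.*                                      /var/log/cron
*.emerg                                     *
uucp,news.crit                              /var/log/spooler
local7.*                                    /var/log/boot.log
```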

You can send the logs to a central log server over TCP by adding the following entry to the forwarding rules section of /etc/rsyslog.conf on each log client:
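For example (logsvr and port are the placeholders described below; the double @ selects TCP rather than UDP forwarding):

```
*.* @@logsvr:port
```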

where logsvr is the domain name or IP address of the log server and port is the port number (usually, 514).

On the log server, add the following entry to the MODULES section of /etc/rsyslog.conf :
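For example, the following loads the TCP input module and starts a listener (port is the placeholder described below):

```
$ModLoad imtcp
$InputTCPServerRun port
```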

where port corresponds to the port number that you set on the log clients.

To manage the rotation and archival of the correct logs, edit /etc/logrotate.d/syslog so that it references each of the log files that are defined in the RULES section of /etc/rsyslog.conf . You can configure how often the logs are rotated and how many past copies of the logs are archived by editing /etc/logrotate.conf .

It is recommended that you configure Logwatch on your log server to monitor the logs for suspicious messages, and disable Logwatch on log clients. However, if you do use Logwatch, disable high precision timestamps by adding the following entry to the GLOBAL DIRECTIVES section of /etc/rsyslog.conf on each system:
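The entry in question is typically the following directive, which switches rsyslog to the traditional low-precision timestamp format:

```
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
```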

For more information, see the logrotate(8), logwatch(8), rsyslogd(8), and rsyslog.conf(5) manual pages, the HTML documentation in the /usr/share/doc/rsyslog-5.8.10 directory, and the documentation at http://www.rsyslog.com/doc/manual.html.

Copyright © 2013, 2019, Oracle and/or its affiliates. All rights reserved. Legal Notices


Linux System Log Files

This article explains how to identify system log files on Linux, with specific reference to the information needed for the RHCSA EX200 and RHCE EX300 certification exams.

Remember, the exams are hands-on, so it doesn’t matter which method you use to achieve the result, so long as the end product is correct.

Location of System logs

The "/etc/rsyslog.conf" file defines the location of most of the system log files. Most of the file is commented out, but the rules section defines the relevant locations.
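For example, the rules section typically contains entries such as these (an illustrative excerpt, not the complete file):

```
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
authpriv.*                                  /var/log/secure
cron.*                                      /var/log/cron
```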

The majority of logging is done to the "/var/log" directory, so this is likely to be the first place you will look in the event of a problem. Probably the most common location is the "/var/log/messages" file.

A number of application services log to different locations. For example, the HTTPD service will log errors to the "/etc/httpd/logs/error_log" file by default. In addition, each virtual host defined in the "/etc/httpd/conf/httpd.conf" file can specify its own logging destination.

Log Rotation

The "/etc/logrotate.d/syslog" file contains the log rotation instructions for the major system logs.
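On a typical system this file resembles the following (a sketch; the exact file list and postrotate command vary by release):

```
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
{
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
```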

Analyzing Logs

Analyzing log files will typically start with identifying the relevant log file for your issue. If you don't know which log file to check, go to the "/var/log" directory and look at the files available. If nothing jumps out at you as looking relevant, check the "/var/log/messages" file as a starting point.

Once you have found a file to analyze, you can read it using an editor (like vi), or perform file processing operations on it to pull out relevant text.

The "tail -f" command (for example, tail -f /var/log/messages) is useful for watching continuous writes to log files over a period of time.


