- 2.3.3 Downloading the Oracle Linux Yum Server Repository Files
- Chapter 7 Managing Oracle Cluster File System Version 2
- 7.1 About OCFS2
- 7.2 Creating a Local OCFS2 File System
- 7.3 Installing and Configuring OCFS2
- 7.3.1 Preparing a Cluster for OCFS2
- 7.3.2 Configuring the Firewall
- 7.3.3 Configuring the Cluster Software
- 7.3.4 Creating the Configuration File for the Cluster Stack
- 7.3.5 Configuring the Cluster Stack
- 7.3.6 Configuring the Kernel for Cluster Operation
- 7.3.7 Starting and Stopping the Cluster Stack
- 7.3.8 Creating OCFS2 volumes
Oracle® Linux 6
Administrator’s Solutions Guide
The software described in this documentation is either in Extended Support or Sustaining Support. See https://www.oracle.com/us/support/library/enterprise-linux-support-policies-069172.pdf for more information.
Oracle recommends that you upgrade the software described by this documentation as soon as possible.
2.3.3 Downloading the Oracle Linux Yum Server Repository Files
The Oracle Linux yum server provides a direct mapping of all of the Unbreakable Linux Network (ULN) channels that are available to the public without any specific support agreement. The repository labels used for each repository on the Oracle Linux yum server map directly onto the channel names on ULN. See Oracle® Linux: Unbreakable Linux Network User’s Guide for Oracle Linux 6 and Oracle Linux 7 for more information about the channel names and common suffixes used for channels and repositories.
Prior to January 2019, Oracle shipped a single yum repository configuration file for each Oracle Linux release. This configuration file was copied to /etc/yum.repos.d/public-yum-ol6.repo during installation, but could also be downloaded directly from the Oracle Linux yum server to obtain updates.
The original configuration file is deprecated in favor of modular repository configuration files, delivered as RPM packages that are more targeted in scope and are managed and updated automatically via yum. For example, the core repository configuration files required for Oracle Linux 6 are available in the oraclelinux-release-el6 package. This package includes all of the repository configuration required to install base packages for the release, including packages from the ol6_latest and ol6_addons repositories and all of the supported repositories for UEK.
Because the modular yum repository configuration files are released as packages that yum itself maintains, they simplify repository management and ensure that your yum repository definitions are kept up to date automatically whenever you update your system.
A list of all available RPM files to manage all of the possible yum repository configurations for your release can be obtained by running:
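One way to list them is shown in the following sketch; the wildcard pattern is an assumption based on the naming convention of the release packages described above (for example, oraclelinux-release-el6):
# yum list *release-el6*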
To install the yum repository configuration for a particular set of software that you wish to use, use yum to install the corresponding package. For example, to install the yum repository configuration for the Oracle Linux Software Collection Library, run:
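For example (the package name shown is an assumption; confirm the exact name in the listing produced by the previous command):
# yum install oracle-softwarecollection-release-el6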
If your system is still configured to use the original single yum repository configuration file at /etc/yum.repos.d/public-yum-ol6.repo, you should update your system to transition to the current approach to handling yum repository configuration. To do this, ensure that your system is up to date and then run the /usr/bin/ol_yum_configure.sh script:
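A minimal sketch of those two steps, run as root:
# yum update
# /usr/bin/ol_yum_configure.sh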
The /usr/bin/ol_yum_configure.sh script checks the /etc/yum.repos.d/public-yum-ol6.repo file to determine which repositories are already enabled and installs the appropriate corresponding packages before renaming the original configuration file to /etc/yum.repos.d/public-yum-ol6.repo.sav to disable it in favor of the more recent modular repository configuration files.
If, for some reason, you manage to remove all configuration to access the Oracle Linux yum server repositories, you should create a temporary yum repository configuration file at /etc/yum.repos.d/ol6-temp.repo with the following as the minimum required content:
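A minimal sketch of such a file, assuming the ol6_latest repository label and the standard yum.oracle.com repository layout:
[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1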
Then reinstall the oraclelinux-release-el6 package to restore the default yum configuration:
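For example:
# yum reinstall oraclelinux-release-el6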
For more information on manually setting up Oracle Linux yum server repository configuration files, see https://yum.oracle.com/getting-started.html.
You can enable or disable repositories in each repository configuration file by setting the value of the enabled directive to 1 or 0 for each repository listed in the file, as required. The preferred method of enabling or disabling repositories under Oracle Linux 6 is to use the yum-config-manager command provided in the yum-utils package.
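For example, to disable and later re-enable the ol6_addons repository (illustrative; substitute the repository label that you want to change):
# yum-config-manager --disable ol6_addons
# yum-config-manager --enable ol6_addons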
Copyright © 2012, 2021, Oracle and/or its affiliates. Legal Notices
Chapter 7 Managing Oracle Cluster File System Version 2
This chapter describes how to configure and use the Oracle Cluster File System Version 2 (OCFS2) file system.
7.1 About OCFS2
Oracle Cluster File System version 2 (OCFS2) is a general-purpose, high-performance, high-availability, shared-disk file system intended for use in clusters. It is also possible to mount an OCFS2 volume on a standalone, non-clustered system.
Although it might seem that there is no benefit in mounting OCFS2 locally as compared to alternative file systems such as ext4 or btrfs, you can use the reflink command with OCFS2 to create copy-on-write clones of individual files, in a similar way to using the cp --reflink command with the btrfs file system. Typically, such clones allow you to save disk space when storing multiple copies of very similar files, such as VM images or Linux Containers. In addition, mounting a local OCFS2 file system allows you to subsequently migrate it to a cluster file system without requiring any conversion. Note that when you use the reflink command, the resulting file system behaves like a clone of the original file system, which means that their UUIDs are identical. If you use reflink to create a clone, you must change the UUID by using the tunefs.ocfs2 command. See Section 7.3.10, “Querying and Changing Volume Parameters” for more information.
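For illustration (the file and device names are hypothetical): the reflink command creates a copy-on-write clone of a file, and the -U option of tunefs.ocfs2 generates a new UUID for a cloned volume.
# reflink /mnt/ocfs2/vm.img /mnt/ocfs2/vm-clone.img
# tunefs.ocfs2 -U /dev/sdc1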
Almost all applications can use OCFS2 because it provides local file-system semantics. Applications that are cluster-aware can use cache-coherent parallel I/O from multiple cluster nodes to balance activity across the cluster, or they can use the available file-system functionality to fail over and run on another node in the event that a node fails. The following examples typify some use cases for OCFS2:
Oracle VM to host shared access to virtual machine images.
Oracle VM and VirtualBox to allow Linux guest machines to share a file system.
Oracle Real Application Cluster (RAC) in database clusters.
Oracle E-Business Suite in middleware clusters.
OCFS2 has a large number of features that make it suitable for deployment in an enterprise-level computing environment:
Support for ordered and write-back data journaling that provides file system consistency in the event of power failure or system crash.
Block sizes ranging from 512 bytes to 4 KB, and file-system cluster sizes ranging from 4 KB to 1 MB (both in increments of powers of 2). The maximum supported volume size is 16 TB, which corresponds to a cluster size of 4 KB. A volume size as large as 4 PB is theoretically possible for a cluster size of 1 MB, although this limit has not been tested.
Extent-based allocations for efficient storage of very large files.
Optimized allocation support for sparse files, inline-data, unwritten extents, hole punching, reflinks, and allocation reservation for high performance and efficient storage.
Indexing of directories to allow efficient access to a directory even if it contains millions of objects.
Metadata checksums for the detection of corrupted inodes and directories.
Extended attributes to allow an unlimited number of name:value pairs to be attached to file system objects such as regular files, directories, and symbolic links.
Advanced security support for POSIX ACLs and SELinux in addition to the traditional file-access permission model.
Support for user and group quotas.
Support for heterogeneous clusters of nodes with a mixture of 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
An easy-to-configure, in-kernel cluster-stack (O2CB) with a distributed lock manager (DLM), which manages concurrent access from the cluster nodes.
Support for buffered, direct, asynchronous, splice and memory-mapped I/O.
A tool set that uses similar parameters to the ext3 file system.
7.2 Creating a Local OCFS2 File System
To create an OCFS2 file system that will be locally mounted and not associated with a cluster, use the following command:
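A sketch of the general form (the -M and --fs-features options are assumed to be available in your ocfs2-tools version; check the mkfs.ocfs2(8) manual page):
# mkfs.ocfs2 -M local --fs-features=local -N 1 [options] device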
For example, create a locally mountable OCFS2 volume on /dev/sdc1 with one node slot and the label localvol:
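An illustrative sketch of such a command:
# mkfs.ocfs2 -M local --fs-features=local -N 1 -L "localvol" /dev/sdc1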
You can use the tunefs.ocfs2 utility to convert a local OCFS2 file system to cluster use, for example:
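(An illustrative sketch; verify the option names against the tunefs.ocfs2(8) manual page for your version.)
# tunefs.ocfs2 -M cluster --fs-features=clusterinfo -N 8 /dev/sdc1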
This example also increases the number of node slots from 1 to 8 to allow up to eight nodes to mount the file system.
7.3 Installing and Configuring OCFS2
This section describes procedures for setting up a cluster to use OCFS2.
7.3.1 Preparing a Cluster for OCFS2
For best performance, each node in the cluster should have at least two network interfaces. One interface is connected to a public network to allow general access to the systems. The other interface is used for private communication between the nodes: the cluster heartbeat, which determines how the cluster nodes coordinate their access to shared resources and how they monitor each other’s state. These interfaces must be connected via a network switch. Ensure that all network interfaces are configured and working before continuing to configure the cluster.
You have a choice of two cluster heartbeat configurations:
Local heartbeat thread for each shared device. In this mode, a node starts a heartbeat thread when it mounts an OCFS2 volume and stops the thread when it unmounts the volume. This is the default heartbeat mode. There is a large CPU overhead on nodes that mount a large number of OCFS2 volumes as each mount requires a separate heartbeat thread. A large number of mounts also increases the risk of a node fencing itself out of the cluster due to a heartbeat I/O timeout on a single mount.
Global heartbeat on specific shared devices. You can configure any OCFS2 volume as a global heartbeat device provided that it occupies a whole disk device and not a partition. In this mode, the heartbeat to the device starts when the cluster comes online and stops when the cluster goes offline. This mode is recommended for clusters that mount a large number of OCFS2 volumes. A node fences itself out of the cluster if a heartbeat I/O timeout occurs on more than half of the global heartbeat devices. To provide redundancy against failure of one of the devices, you should therefore configure at least three global heartbeat devices.
Figure 7.1 shows a cluster of four nodes connected via a network switch to a LAN and a network storage server. The nodes and the storage server are also connected via a switch to a private network that they use for the local cluster heartbeat.
It is possible to configure and use OCFS2 without using a private network but such a configuration increases the probability of a node fencing itself out of the cluster due to an I/O heartbeat timeout.
7.3.2 Configuring the Firewall
Configure or disable the firewall on each node to allow access on the interface that the cluster will use for private cluster communication. By default, the cluster uses both TCP and UDP over port 7777.
To allow incoming TCP connections and UDP datagrams on port 7777, use the following commands:
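A sketch using iptables on Oracle Linux 6 (the rule details are assumptions; adjust them for your firewall configuration):
# iptables -I INPUT -p tcp -m state --state NEW --dport 7777 -j ACCEPT
# iptables -I INPUT -p udp --dport 7777 -j ACCEPT
# service iptables save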
7.3.3 Configuring the Cluster Software
Ideally, each node should be running the same version of the OCFS2 software and a compatible version of the Oracle Linux Unbreakable Enterprise Kernel (UEK). It is possible for a cluster to run with mixed versions of the OCFS2 and UEK software, for example, while you are performing a rolling update of a cluster. The cluster node that is running the lowest version of the software determines the set of usable features.
Use yum to install or upgrade the following packages to the same version on each node:
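For example (the package names are assumptions based on the kernel and tools referenced in this chapter; confirm the exact names for your release):
# yum install kernel-uek ocfs2-tools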
If you want to use the global heartbeat feature, you must install ocfs2-tools-1.8.0-11 or later.
7.3.4 Creating the Configuration File for the Cluster Stack
You can create the configuration file by using the o2cb command or a text editor.
To configure the cluster stack by using the o2cb command:
Use the following command to create a cluster definition.
For example, to define a cluster named mycluster with four nodes:
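A sketch of the general form followed by the mycluster example (the add-cluster subcommand is assumed from the o2cb(8) manual page; verify it for your ocfs2-tools version):
# o2cb add-cluster cluster_name
# o2cb add-cluster mycluster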
The command creates the configuration file /etc/ocfs2/cluster.conf if it does not already exist.
For each node, use the following command to define the node.
The name of the node must be the same as the value of the system’s HOSTNAME that is configured in /etc/sysconfig/network. The IP address is the one that the node will use for private communication in the cluster.
For example, to define a node named node0 with the IP address 10.1.0.100 in the cluster mycluster :
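An illustrative sketch (the --ip, --number, and --port option names are assumptions from the o2cb(8) manual page; the port value follows the default 7777 described earlier):
# o2cb add-node --ip 10.1.0.100 --number 0 --port 7777 mycluster node0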
Note that OCFS2 only supports IPv4 addresses.
If you want the cluster to use global heartbeat devices, use the following commands.
You must configure global heartbeat to use whole disk devices. You cannot configure a global heartbeat device on a disk partition.
For example, to use /dev/sdd , /dev/sdg , and /dev/sdj as global heartbeat devices:
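A sketch of the commands (the add-heartbeat and heartbeat-mode subcommands are assumptions from the o2cb(8) manual page; verify them for your ocfs2-tools version):
# o2cb add-heartbeat mycluster /dev/sdd
# o2cb add-heartbeat mycluster /dev/sdg
# o2cb add-heartbeat mycluster /dev/sdj
# o2cb heartbeat-mode mycluster global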
Copy the cluster configuration file /etc/ocfs2/cluster.conf to each node in the cluster.
Any changes that you make to the cluster configuration file do not take effect until you restart the cluster stack.
The following sample configuration file /etc/ocfs2/cluster.conf defines a 4-node cluster named mycluster with a local heartbeat.
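A sketch of such a file, following the layout rules described later in this section (the node names and IP addresses continue the node0/10.1.0.100 example and are illustrative):
node:
	name = node0
	cluster = mycluster
	number = 0
	ip_address = 10.1.0.100
	ip_port = 7777

node:
	name = node1
	cluster = mycluster
	number = 1
	ip_address = 10.1.0.101
	ip_port = 7777

node:
	name = node2
	cluster = mycluster
	number = 2
	ip_address = 10.1.0.102
	ip_port = 7777

node:
	name = node3
	cluster = mycluster
	number = 3
	ip_address = 10.1.0.103
	ip_port = 7777

cluster:
	name = mycluster
	heartbeat_mode = local
	node_count = 4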
If you configure your cluster to use a global heartbeat, the file also includes entries for the global heartbeat devices.
The cluster heartbeat mode is now shown as global , and the heartbeat regions are represented by the UUIDs of their block devices.
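An illustrative fragment (the region values are placeholders; the actual values are the UUIDs of your heartbeat devices):
heartbeat:
	cluster = mycluster
	region = <UUID of /dev/sdd>

heartbeat:
	cluster = mycluster
	region = <UUID of /dev/sdg>

heartbeat:
	cluster = mycluster
	region = <UUID of /dev/sdj>

cluster:
	name = mycluster
	heartbeat_mode = global
	node_count = 4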
If you edit the configuration file manually, ensure that you use the following layout:
The cluster: , heartbeat: , and node: headings must start in the first column.
Each parameter entry must be indented by one tab space.
A blank line must separate each section that defines the cluster, a heartbeat device, or a node.
7.3.5 Configuring the Cluster Stack
To configure the cluster stack:
Run the following command on each node of the cluster:
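For example (a sketch; on Oracle Linux 6 the o2cb service script provides an interactive configure action, and /sbin/o2cb.init configure is an equivalent invocation):
# service o2cb configure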
The following table describes the values for which you are prompted.
Load O2CB driver on boot (y/n)
Whether the cluster stack driver should be loaded at boot time. The default response is n .
Cluster stack backing O2CB
The name of the cluster stack service. The default and usual response is o2cb .
Cluster to start at boot (Enter "none" to clear)
Enter the name of your cluster that you defined in the cluster configuration file, /etc/ocfs2/cluster.conf .
Specify heartbeat dead threshold (>=7)
The number of 2-second heartbeats that must elapse without response before a node is considered dead. To calculate the value to enter, divide the required threshold time period by 2 and add 1. For example, to set the threshold time period to 120 seconds, enter a value of 61. The default value is 31, which corresponds to a threshold time period of 60 seconds.
If your system uses multipathed storage, the recommended value is 61 or greater.
Specify network idle timeout in ms (>=5000)
The time in milliseconds that must elapse before a network connection is considered dead. The default value is 30,000 milliseconds.
For bonded network interfaces, the recommended value is 30,000 milliseconds or greater.
Specify network keepalive delay in ms (>=1000)
The maximum delay in milliseconds between sending keepalive packets to another node. The default and recommended value is 2,000 milliseconds.
Specify network reconnect delay in ms (>=2000)
The minimum delay in milliseconds between reconnection attempts if a network connection goes down. The default and recommended value is 2,000 milliseconds.
To verify the settings for the cluster stack, enter the /sbin/o2cb.init status command:
In this example, the cluster is online and is using local heartbeat mode. If no volumes have been configured, the O2CB heartbeat is shown as Not active rather than Active .
The next example shows the command output for an online cluster that is using three global heartbeat devices:
Configure the o2cb and ocfs2 services so that they start at boot time after networking is enabled:
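For example, using chkconfig (a sketch; adjust if your system uses a different service manager):
# chkconfig o2cb on
# chkconfig ocfs2 on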
These settings allow the node to mount OCFS2 volumes automatically when the system starts.
7.3.6 Configuring the Kernel for Cluster Operation
For the correct operation of the cluster, you must configure the kernel settings shown in the following table:
panic
Specifies the number of seconds after a panic before a system will automatically reset itself.
If the value is 0, the system hangs, which allows you to collect detailed information about the panic for troubleshooting. This is the default value.
To enable automatic reset, set a non-zero value. If you require a memory image (vmcore), allow enough time for Kdump to create this image. The suggested value is 30 seconds, although large systems will require a longer time.
panic_on_oops
Specifies that a system must panic if a kernel oops occurs. If a kernel thread required for cluster operation crashes, the system must reset itself. Otherwise, another node might not be able to tell whether a node is slow to respond or unable to respond, causing cluster operations to hang.
On each node, enter the following commands to set the recommended values for panic and panic_on_oops :
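For example, using the values recommended above:
# sysctl -w kernel.panic=30
# sysctl -w kernel.panic_on_oops=1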
To make the change persist across reboots, add the following entries to the /etc/sysctl.conf file:
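For example:
kernel.panic = 30
kernel.panic_on_oops = 1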
7.3.7 Starting and Stopping the Cluster Stack
The following table shows the commands that you can use to perform various operations on the cluster stack.
/sbin/o2cb.init status
Check the status of the cluster stack.
/sbin/o2cb.init online
Start the cluster stack.
/sbin/o2cb.init offline
Stop the cluster stack.
/sbin/o2cb.init unload
Unload the cluster stack.
7.3.8 Creating OCFS2 volumes
You can use the mkfs.ocfs2 command to create an OCFS2 volume on a device. If you want to label the volume and mount it by specifying the label, the device must correspond to a partition. You cannot mount an unpartitioned disk device by specifying a label. The following table shows the most useful options that you can use when creating an OCFS2 volume.
-b block-size
--block-size block-size
Specifies the unit size for I/O transactions to and from the file system, and the size of inode and extent blocks. The supported block sizes are 512 (512 bytes), 1K, 2K, and 4K. The default and recommended block size is 4K (4 kilobytes).
-C cluster-size
--cluster-size cluster-size
Specifies the unit size for space used to allocate file data. The supported cluster sizes are 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, and 1M (1 megabyte). The default cluster size is 4K (4 kilobytes).
--fs-feature-level=feature-level
Allows you to select a set of file-system features:
default
Enables support for the sparse files, unwritten extents, and inline data features.
max-compat
Enables only those features that are understood by older versions of OCFS2.
max-features
Enables all features that OCFS2 currently supports.
--fs_features=feature
Allows you to enable or disable individual features such as support for sparse files, unwritten extents, and backup superblocks. For more information, see the mkfs.ocfs2(8) manual page.
-J size=journal-size
--journal-options size=journal-size
Specifies the size of the write-ahead journal. If not specified, the size is determined from the file-system usage type that you specify with the -T option or, otherwise, from the volume size. The default size of the journal is 64M (64 MB) for datafiles, 256M (256 MB) for mail, and 128M (128 MB) for vmstore.
-L volume-label
--label volume-label
Specifies a descriptive name for the volume that allows you to identify it easily on different cluster nodes.
-N number
--node-slots number
Determines the maximum number of nodes that can concurrently access a volume, which is limited by the number of node slots for system files such as the file-system journal. For best performance, set the number of node slots to at least twice the number of nodes. If you subsequently increase the number of node slots, performance can suffer because the journal will no longer be contiguously laid out on the outer edge of the disk platter.
-T file-system-usage-type
Specifies the type of usage for the file system:
datafiles
Database files are typically few in number, fully allocated, and relatively large. Such files require few metadata changes, and do not benefit from having a large journal.
mail
Mail server files are typically many in number, and relatively small. Such files require many metadata changes, and benefit from having a large journal.
vmstore
Virtual machine image files are typically few in number, sparsely allocated, and relatively large. Such files require a moderate number of metadata changes and a medium sized journal.
For example, create an OCFS2 volume on /dev/sdc1 labeled as myvol using all the default settings for generic usage on file systems that are no larger than a few gigabytes. The default values are a 4 KB block and cluster size, eight node slots, a 256 MB journal, and support for default file-system features.
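An illustrative sketch:
# mkfs.ocfs2 -L "myvol" /dev/sdc1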
Create an OCFS2 volume on /dev/sdd2 labeled as dbvol for use with database files. In this case, the cluster size is set to 128 KB and the journal size to 32 MB.
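A sketch (the -T datafiles usage type matches the description above; the explicit -C and -J values repeat the cluster and journal sizes stated in the text):
# mkfs.ocfs2 -L "dbvol" -C 128K -J size=32M -T datafiles /dev/sdd2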
Create an OCFS2 volume on /dev/sde1 with a 16 KB cluster size, a 128 MB journal, 16 node slots, and support enabled for all features except refcount trees.
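A sketch (the norefcount feature flag is an assumption for disabling refcount trees; check the mkfs.ocfs2(8) manual page for the exact feature name):
# mkfs.ocfs2 -C 16K -J size=128M -N 16 --fs-feature-level=max-features --fs_features=norefcount /dev/sde1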
Do not create an OCFS2 volume on an LVM logical volume. LVM is not cluster-aware.
You cannot change the block and cluster size of an OCFS2 volume after it has been created. You can use the tunefs.ocfs2 command to modify other settings for the file system with certain restrictions. For more information, see the tunefs.ocfs2(8) manual page.
If you intend the volume to store database files, do not specify a cluster size that is smaller than the block size of the database.
The default cluster size of 4 KB is not suitable if the file system is larger than a few gigabytes. The following table suggests minimum cluster size settings for different file system size ranges: