Understanding "yum clean all" and Stale Repos

Linux RedHat

When it comes to managing packages on a Linux system, the YUM (Yellowdog Updater Modified) package manager is widely used for its ease of use and robust features. One of the handy commands in YUM is yum clean all, which helps you keep your system clean and optimized. In this blog post, we will delve into the functionalities of yum clean all and explore how it can help you clear accumulated cache and improve system performance.

Cleaning Options

The yum clean command can be used with specific options to clean individual components. Here are some of the available options:

  • packages: Cleans package-related cache files
  • metadata: Removes metadata and other files related to enabled repositories
  • expire-cache: Cleans the cache for metadata that has expired
  • rpmdb: Cleans cached data for the local RPM database
  • plugins: Clears any cache maintained by YUM plugins
  • all: Performs a comprehensive cleaning, covering all the above options

Clean Everything

The yum clean all command serves as a one-stop solution to clean various elements that accumulate in the YUM cache directory over time. It offers several cleaning options, allowing you to target specific items or perform a comprehensive clean.

It’s essential to note that yum clean all does not clean untracked “stale” repositories. This means that if a repository is no longer in use or has been disabled, its cache will not be cleared by this command. We’ll explore an alternative method to handle untracked repositories shortly.

Analyzing Cache Usage

Before diving into the cleaning process, it’s helpful to analyze the cache usage on your system. You can use the following command to check the cache usage:

$ df -hT /var/cache/yum/

Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-varlv xfs   8.0G  4.6G  3.5G  57% /var

Cleaning with ‘yum clean all’

When you execute yum clean all, the command will remove various cached files and improve system performance. However, you may sometimes notice a warning regarding other repository data:

Other repos take up 1.1 G of disk space (use --verbose for details)

If you run it with the --verbose flag, you will see a list of the stale/untracked repos:

$ sudo yum clean all --verbose

Not loading "rhnplugin" plugin, as it is disabled
Loading "langpacks" plugin
Loading "product-id" plugin
Loading "search-disabled-repos" plugin
Not loading "subscription-manager" plugin, as it is disabled
Adding en_US.UTF-8 to language list
Config time: 0.110
Yum version: 3.4.3
Cleaning repos: epel rhel-7-server-ansible-2-rhui-rpms rhel-7-server-rhui-extras-rpms rhel-7-server-rhui-optional-rpms rhel-7-server-rhui-rh-common-rpms rhel-7-server-rhui-rpms
              : rhel-7-server-rhui-supplementary-rpms rhel-server-rhui-rhscl-7-rpms rhui-microsoft-azure-rhel7
Operating on /var/cache/yum/x86_64/7Server (see CLEAN OPTIONS in yum(8) for details)
Disk usage under /var/cache/yum/*/* after cleanup:
0      enabled repos
0      disabled repos
1.1 G  untracked repos:
  1.0 G  /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-rpms
  90 M   /var/cache/yum/x86_64/7Server/rhui-rhel-server-rhui-rhscl-7-rpms
  9.8 M  /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-extras-rpms
  6.4 M  /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-supplementary-rpms
  5.3 M  /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-dotnet-rhui-rpms
  1.6 M  /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-rh-common-rpms
4.0 k  other data:
  4.0 k  /var/cache/yum/x86_64/7Serve

Manually Removing Untracked Repository Files

To handle untracked repository files, you can manually remove them from the cache directory with rm:

$ sudo rm -rf /var/cache/yum/*

Refreshing the Cache

Next time you run commands like yum check-update or any other operation that refreshes the cache, YUM will rebuild the package list and recreate the cache directory for the enabled repositories.

Check the Cache Usage After Cleanup

After performing the cleanup, you can verify the reduced cache usage. Use the df command again to check the cache size:

$ df -hT /var/cache/yum/

Doing it all with Ansible

And if you want to, you can use the Ansible playbook below to automate the YUM cache purge and re-creation:

---
- name: Delete yum cache and run yum check-update
  # Add your hosts accordingly below
  hosts: webserver
  become: yes
  tasks:

    - name: Check /var usage with df command
      shell: |
        df -hT /var | awk '{print $6}' | tail -1 | tr -d '%'
      register: var_usage_output

    - name: Display /var usage information
      debug:
        var: var_usage_output.stdout

    ##-- Block starts --------------------------------------------------------------
    - when: var_usage_output.stdout|int > 55
      block:

      - name: Delete yum cache directory
        file:
          path: /var/cache/yum
          state: absent

      - name: Update YUM cache
        yum:
          update_cache: yes

      - name: Check /var usage with df command
        shell: |
          df -hT /var | awk '{print $6}' | tail -1
        register: var_usage_after_cleanup_output

      - name: Display /var usage information
        debug:
          var: var_usage_after_cleanup_output.stdout
    ##-- block ends ----------------------------------------------------------------
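The playbook's threshold check boils down to the awk/tail/tr pipeline in the first task. Here is a standalone sketch, with a hypothetical df output hard-coded, that you can use to sanity-check the parsing before wiring it into Ansible:

```shell
# Hypothetical output of `df -hT /var`; on a real host you would capture it live
df_output='Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-varlv xfs   8.0G  4.6G  3.5G  57% /var'

# Same pipeline as the playbook: 6th column (Use%), last line, strip the '%'
usage=$(printf '%s\n' "$df_output" | awk '{print $6}' | tail -1 | tr -d '%')

# Mirrors the playbook's `when: var_usage_output.stdout|int > 55` condition
if [ "$usage" -gt 55 ]; then
  echo "Usage is ${usage}% - cleanup would run"
fi
```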

Conclusion

Regularly cleaning the YUM cache with the yum clean all command can help optimize your system’s performance by clearing accumulated files. By understanding the available cleaning options and handling untracked repositories, you can ensure that your YUM cache remains streamlined and efficient. Keep your system running smoothly and enjoy the benefits of a clean YUM cache!

Remember, maintaining a clean and optimized system contributes to a seamless Linux experience.

Linux find Command

Linux bash

The Linux find command is a very versatile tool in the pocket of any Linux administrator. It allows you to quickly search for files and take action based on the results.

The basic construct of the command is:

find [options] [path] [expression]

The options part of the command controls some basic functionality for find, and I'm not going to cover it here. Instead, I will quickly look at the components of the expression part of the command and provide some useful examples.

Expressions are composed of tests, actions, global options, positional options and operators:

  • Tests - Return true or false (e.g.: -mtime, -name, -size)
  • Actions - Act on something (e.g.: -exec, -delete)
  • Global options - Affect the operation of tests and actions (e.g.: -depth, -maxdepth)
  • Positional options - Modify tests and actions (e.g.: -follow, -regextype)
  • Operators - Include, join and modify items (e.g.: -a, -o)

Operators

Operators are the logical OR and AND of find, as well as negation and expression grouping.

  • \( expr \) - Force precedence
  • ! or -not - Negate
  • -a or -and - Logical AND. If two expressions are given without -a, find takes it as implied. expr2 is not evaluated if expr1 is false
  • -o or -or - Logical OR. expr2 is not evaluated if expr1 is true
  • , - Both expr1 and expr2 are always evaluated. The value of expr1 is discarded

You can use the operators for repeated search options with different values.

Example 1: Find files with multiple file extensions

find ${jboss_dplymnt_dir} -type f -size -2k \( \
  -name '*.deployed' -o -name '*.dodeploy' \
  -o -name '*.skipdeploy' -o -name '*.isdeploying' \
  -o -name '*.failed' -o -name '*.isundeploying' \
  -o -name '*.pending' -o -name '*.undeployed' \
\)

Example 2: find MOV or MP4 files that do not have ‘h265’ in the file name

find . \( \
  ! -name '*h265*' -a \( \
    -name '*.mp4' -o -name '*.MP4' \
    -o -name '*.mov' -o -name '*.MOV' \
  \) \
\)

Or to negate search options.

Example 3: find files that don’t finish with the ‘.log’ extension

find . -type f -not -name '*.log'

Example 4: Excludes everything in the folder named ‘directory’

find -name "*.js" -not -path "./directory/*"
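If the operator logic ever feels ambiguous, a throwaway scratch directory settles it quickly. The file names below are invented for the demo:

```shell
# Build a scratch directory with a mix of matching and non-matching files
tmp=$(mktemp -d)
touch "$tmp/app.log" "$tmp/video_h265.mp4" "$tmp/video.mp4"

# Implicit AND between the tests: .mp4 files whose names do NOT contain 'h265'
found=$(find "$tmp" -type f ! -name '*h265*' -name '*.mp4')
basename "$found"    # prints: video.mp4

rm -rf "$tmp"
```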

Global Options

Two very useful global options are -maxdepth and -mindepth.

  • maxdepth levels: Descend at most levels (a non-negative integer) levels of directories below the starting-points. -maxdepth 0 means only apply the tests and actions to the starting-points themselves.
  • mindepth levels: Do not apply any tests or actions at levels less than levels (a non-negative integer). -mindepth 1 means process all files except the starting-points.

Example 5: find all files with ‘.txt’ extension on the current dir and do not descend into subdirectories

find . -maxdepth 1 -name '*.txt'
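The effect of the depth options is easy to verify in a scratch tree (all names below are made up):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/top.txt" "$tmp/sub/nested.txt"

# -maxdepth 1: stay at the first level, the nested file is never visited
top_only=$(find "$tmp" -maxdepth 1 -name '*.txt')

# -mindepth 2: skip the first level, only the nested file matches
nested_only=$(find "$tmp" -mindepth 2 -name '*.txt')

rm -rf "$tmp"
```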

Tests

This is where you can target specific properties about the files that you are searching for. Some of my preferred tests are:

  • -iname - Like -name, but the match is case insensitive
  • -size - Matches based on size (e.g.: -size -2k, -size +1G)
  • -user - File belongs to username
  • -newer - File was modified more recently than the reference file. It can be a powerful option

Example 6: Find files for user

find . -type f -user linustorvalds

Example 7: Find files larger than 1GB

find . -type f -size +1G

Example 8: Here’s a hidden gem! Let’s say you need to find what files a program is creating. All you need to do is create a file, run the program, and then use -newer to find what files were created

# Create a file
touch file_marker

# Here I run the script or program
./my_script.sh

# Now I can use the file I created to find newer files that were created by the script
find . -type f -newer 'file_marker'
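Here is the same trick as a self-contained sketch, with an echo standing in for the program under investigation (file names are invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"

touch file_marker
sleep 1                                  # guard against 1-second mtime granularity
echo "output" > created_later.txt        # stands in for ./my_script.sh

# Only files modified after the marker are reported
newer=$(find . -type f -newer file_marker)
echo "$newer"    # prints: ./created_later.txt
```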

Actions

Actions will execute the specified action against the matched filenames. Some useful actions are:

  • -ls - Lists matched files with ls -dils
  • -delete - Deletes the matched files
  • -exec - Execute the specified command against the files
  • -ok - Prompts the user for confirmation before executing the command
  • -print0 - Prints the full name of the matched files followed by a null character

Real-life Scenarios

Get a confirmation prompt

Use -ok to get a confirmation prompt before executing the action on the matched files.

find . -type f -name '*.txt' -ok rm -rf {} \;
< rm ... ./file_10.txt > ? y
< rm ... ./file_9.txt > ? y
< rm ... ./file_8.txt > ?

Deleting files

Whenever possible, use -delete instead of -exec rm -rf {} \;.

find . -type f -name "*.bak" -exec rm -f {} \;

Instead use:

find . -type f -name "*.bak" -delete
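-delete is worth rehearsing in a scratch directory first; note that the position of -delete matters, since find evaluates the expression left to right. Everything below is a throwaway example:

```shell
tmp=$(mktemp -d)
touch "$tmp/a.bak" "$tmp/b.bak" "$tmp/keep.txt"

# Tests first, -delete last: only the .bak files are removed
find "$tmp" -type f -name '*.bak' -delete

remaining=$(find "$tmp" -type f)
basename "$remaining"    # prints: keep.txt
rm -rf "$tmp"
```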

Using xargs and {} +

The command below will run into a problem if it encounters files or directories with embedded spaces in their names. xargs will treat each part of such a name as a separate argument:

find /usr/include -name "*.h" -type f | xargs grep "#define UINT"

There are two ways to avoid that. You could tell both find and xargs to use a NUL character as a separator:

find /usr/include -name "*.h" -type f -print0 | xargs -0 grep "#define UINT"

Or, you could avoid use of xargs altogether and let find invoke grep directly:

find /usr/include -name "*.h" -type f -exec grep "#define UINT" {} +

That final + tells find that grep will accept multiple file name arguments. Like xargs, find will put as many names as possible into each invocation of grep.
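You can reproduce the whitespace problem, and confirm the fix, with a header file whose name contains a space. All names here are invented:

```shell
tmp=$(mktemp -d)
printf '#define UINT_DEMO 1\n' > "$tmp/my header.h"

# Naive pipe: 'my' and 'header.h' arrive as two broken arguments, so nothing matches
broken=$(find "$tmp" -name '*.h' -type f | xargs grep -l "#define UINT" 2>/dev/null || true)

# NUL-separated handoff: the full file name survives intact
fixed=$(find "$tmp" -name '*.h' -type f -print0 | xargs -0 grep -l "#define UINT")

echo "$fixed"
rm -rf "$tmp"
```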

Find and Move

Find files and move them to a different location.

  • -v - For verbose
  • -t - Move all SOURCE arguments into DIRECTORY
  • -exec command {} + - As we saw before, this variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files

No need for \; at the end.

find . -maxdepth 1 -name 'sa-*' -exec mv -v -t rsa-todelete/ {} +
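A sandboxed run of the same pattern (file and directory names invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir to-delete
touch sa-one sa-two keep-me

# One mv invocation receives all the matches, thanks to '{} +'
find . -maxdepth 1 -type f -name 'sa-*' -exec mv -v -t to-delete/ {} +

# Both sa- files are now inside to-delete/, keep-me is untouched
ls to-delete
```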

Search PDF content

Here we search the contents of PDF files for specific text. Two options are shown below; each requires installing an additional package (pdfgrep or ripgrep-all).

With pdfgrep:

find . -iname '*.pdf' -exec pdfgrep [pattern] {} +

With ripgrep-all:

find . -iname '*.pdf' | xargs rga -H PURGE_TRAN_LOG.sh

Check if files exists

One of the shortcomings of find is that its exit status does not reflect whether anything matched. But we can still use it in an if condition by grepping the result.

if find /var/log -name '*.log' | grep -q log ; then
  echo "File exists"
fi
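With GNU find you can also stop at the first match using -print -quit, which avoids walking the rest of the tree. A scratch-directory sketch:

```shell
tmp=$(mktemp -d)
touch "$tmp/app.log"

# Emit the first match and stop the search immediately
first=$(find "$tmp" -name '*.log' -print -quit)

if [ -n "$first" ]; then
  echo "File exists"
fi
rm -rf "$tmp"
```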

Conclusion

You should now have a better understanding of the find command, as well as some nifty use cases to impress your co-workers.

If you found something useful or want to share an interesting use for find, please leave a comment below.

Getting Started with Ansible Molecule

Linux Ansible Docker

Overview

If you haven’t heard about Ansible Molecule, you came to the right place. I will cover what it is, its basic usage, and how to install it and get started.

What it is

Ansible Molecule is a project (tool) that can greatly improve your Ansible workflow. It allows you to automate your tasks (which is great for CI) by providing the means to run different, idempotent tests against your roles.

And Molecule is based on Ansible, so you are essentially using Ansible to test Ansible. How cool is that!?

What it does

To put it in simple words, Molecule tests your code. With Molecule you can easily run multiple tests against your code and make sure it works before deploying to an environment.

Some of the tests you can run are:

  • Yaml lint
  • Ansible lint
  • Ansible syntax
  • Run the Ansible role against a virtual machine
  • Test idempotence (run the same Ansible role against the same virtual machine for the second time)

Folder Structure

When invoked, Molecule creates a single role folder with the usual Ansible structure. Inside this role folder, an additional molecule folder is created. This is where the main Molecule configuration lives.

$ tree
.
├── defaults
├── files
├── handlers
├── molecule
│   └── default
│       ├── converge.yml
│       ├── molecule.yml
│       └── verify.yml
├── tasks
├── templates
├── tests
└── vars

Test Types and Scenarios

Test scenarios can be configured inside the molecule folder, and each scenario should have its own folder.

A ‘default’ scenario is created automatically with the following tests enabled (you can change them according to your needs):

  • lint
  • destroy
  • dependency
  • syntax
  • create
  • prepare
  • converge
  • idempotence
  • side_effect
  • verify
  • destroy
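If you need to trim or reorder those steps for a scenario, the sequence can be overridden in molecule.yml. A hedged sketch, assuming the Molecule v3 scenario schema:

```yaml
# molecule/default/molecule.yml (fragment); the steps listed here replace the default order
scenario:
  test_sequence:
    - lint
    - syntax
    - create
    - converge
    - idempotence
    - destroy
```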

Drivers

Three different drivers (Vagrant, Docker, and OpenStack) can be used to create virtual machines. These virtual machines are used to test our roles.

In this tutorial we will be using Docker as our driver.

Installing Molecule

Molecule can be installed via pip or with distro packages (depending on your distro). You can mix and match and install Molecule via pip and specific components (like ansible or ansible-lint) via your distro’s package manager.

Notes:

  • On Windows Molecule can only be installed via WSL
  • I’m assuming you already have Ansible installed and will not cover it here

Windows Install (Ubuntu WSL)

On Ubuntu, Molecule needs to be installed via pip. If you are running another distro in WSL, check whether the packages are available in your package manager (if you prefer to install that way).

Install python3-pip.

$ sudo apt install -y python3-pip

Create a python virtual environment for Molecule. The software we are going to install will reside in the virtual environment (we can use the environment many times).

$ python3 -m venv molecule

Activate the environment (see that the prompt changes).

$ source molecule/bin/activate
(molecule-venv) $

Install the wheel package.

(molecule-venv) $ python3 -m pip install wheel

Install ansible and ansible-lint. You can do this via Python/Molecule, or via the OS.

OS

$ apt install ansible ansible-lint

Via molecule

(molecule-venv) $ python3 -m pip install "molecule[ansible]"  # or molecule[ansible-base]

Install molecule and docker.

(molecule-venv) $ python3 -m pip install "molecule[docker,lint]"

Linux Install (Arch)

If you are running Arch (I use Arch BTW) you can install everything with pacman (or with pip like we did for Windows WSL).

$ sudo /usr/bin/pacman -S yamllint ansible-lint molecule molecule-docker

Getting Started

Creating a Molecule Role

We will now create and configure a role with Molecule using Docker as the driver.

Run molecule init role [role name] -d docker.

This should have created a new role folder with the required molecule files inside.

molecule.yml

Holds the default configuration for molecule.

./myrole/molecule/default/molecule.yml

---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:8
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible

converge.yml

This is the playbook file that molecule will run.

./myrole/molecule/default/converge.yml

---
- name: Converge
  hosts: all
  tasks:
    - name: "Include myrole"
      include_role:
        name: "myrole"

Running a Simple Test

Let’s use the existing configuration in molecule.yml and converge.yml to run a simple test.

Edit the tasks file (myrole/tasks/main.yml) and add a simple task (you can use the example below with the debug module):

---
# tasks file for myrole

- debug:
    msg: "System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"

Run molecule converge (from the role folder) to build the image and run the playbook.

Now that we got the very basic stuff done, let’s move into a bit more advanced steps so we can better understand Molecule and use it with our code.

Additional Steps and Configuration

Adding Lint

Lint is not enabled by default, but that can be easily changed by editing molecule.yml and adding the lint key to it:

dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:8
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint .

Now run molecule lint. You should get a lot of warnings due to information missing in the meta folder.

As instructed by the output of the command, you can quiet or disable the messages by adding a warn_list or skip_list to .ansible-lint

Tip: You can also fine tune yaml linting by editing .yamllint

Running a Full Test

So far we have only run two tests:

  • Converge (dependency, create, prepare, converge)
  • Lint (yaml lint and Ansible lint)

Let’s run a full test (default scenario) on our role. Remember, the full test will run dependency, lint, syntax, create, prepare, converge, idempotence, side_effect, verify, cleanup and destroy.

Run molecule test.

Running Roles from Another Folder

You can also use Molecule to test roles from another folder (which makes molecule very flexible for testing).

Let’s say I have the following folder structure:

.
└── Ansible
    ├── Molecule
    └── web-project

Inside my web-project folder I have a role called ‘apache’ that installs (guess what?) httpd.

./Ansible/web-project/roles/apache/tasks/main.yml

---
# tasks file for apache

- name: Installs Apache
  yum:
     name: httpd
     state: present

I can easily modify my existing converge.yml to include that role:

./Ansible/Molecule/myrole/molecule/default/converge.yml

---
- name: Converge
  hosts: all
  tasks:
    - name: "Include myrole"
      include_role:
        name: "../../web-project/roles/apache"

Also edit molecule.yml so we are linting that external folder:

./Ansible/Molecule/myrole/molecule/default/molecule.yml

lint: |
  set -e
  yamllint ../../web-project
  ansible-lint ../../web-project

And then run molecule converge (from the Molecule role folder) to test. Because molecule converge does not include the destroy step, I can log in to the container (with molecule login) and check whether httpd was installed.

Tip:

  • When testing against multiple containers you can use molecule login --host [container name]
  • You can also use docker cli to connect to the container - docker exec -ti [container name] /bin/bash

Note: In this example nothing other than the role is imported (e.g. variables and config from ansible.cfg are not imported)

Additional Container Configuration

We can configure different options for our container in molecule.yml under the platforms key section. Configuration here is similar to a docker compose file.

  • name - Name of the container
  • image - Image for the container. Can be local or from a remote registry
  • pre_build_image - Instructs Molecule to use a pre-built image (pulled or local) instead of building from molecule/default/Dockerfile.j2
  • privileged - Give extended privileges (a “privileged” container is given access to all devices)
  • capabilities - Same as docker --cap-add. Grants specific capabilities to the container, a smaller subset compared to the privileged option
  • command - Same as Dockerfile CMD. Overrides the command the container runs
  • groups - Assigns the container to one or more Ansible groups

Example:

---
dependency:
  name: galaxy
driver:
  name: podman
platforms:
  - name: rhel8
    image: registry.access.redhat.com/ubi8/ubi-init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    capabilities:
      - SYS_ADMIN
    command: "/usr/sbin/init"
    pre_build_image: true
  - name: ubuntu
    image: geerlingguy/docker-ubuntu2004-ansible
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    capabilities:
      - SYS_ADMIN
    command: "/lib/systemd/systemd"
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint .

Conclusion

I’m hoping I was able to provide you with enough information on how to get started with Molecule. If you have any comments, suggestions or corrections, please leave them in the comment box below.

How to Convert Standard Partition to Logical Volume

Linux

Preamble

When I bought my new and shiny Lenovo P1 at the beginning of the year, I chose a configuration that had 2 NVMe drives: one of 500GB and the other of 256GB. Innocently I thought that this would be plenty. But since I started hoarding test VMs and creating videos with my GoPro and DaVinci Resolve (while we are here, check out my YouTube channel), that storage space quickly became too small and I knew I had to upgrade my storage!

Taking advantage of a great Black Friday deal I scored a 1TB Samsung 970 Evo Plus NVMe drive for almost half the price.

Preparation

I had my hard drives organized into partitions as follows:

nvme0n1             259:0    0 476.9G  0 disk
├─nvme0n1p1         259:1    0   250G  0 part /home
└─nvme0n1p2         259:2    0 226.9G  0 part /mnt/storage2
nvme1n1             259:3    0   256G  0 disk
├─nvme1n1p1         259:4    0   512M  0 part /efi
├─nvme1n1p2         259:6    0    32G  0 part [SWAP]
├─nvme1n1p3         259:7    0   110G  0 part /
└─nvme1n1p4         259:8    0    96G  0 part /mnt/storage1

The partitions ‘storage1’ and ‘storage2’ are used for video and virtual machine storage.

My plan was to replace the smaller drive and copy my partitions to the new drive as LVs. This is pretty straightforward; you just need some additional steps if your root partition will be in the LV. I am, however, leaving my EFI partition as a standard partition for simplicity and compatibility.

If you are using this post as a guide, please take the time to compare my configuration to yours and make sure that it will work for you. You will most likely need to tweak a few steps. Also note that I’m running Arch, so my instructions will be directed to it.

And finally, if you are adding a new drive, don’t forget to do it now. I installed the new drive and used a USB-C enclosure to access my old drive.

⚠️ WARNING: We will be using dd to copy the old data. Make sure to select the right partitions and, more importantly, the right direction, because dd will happily overwrite all your data.

New NVMe installed

Instructions

Creating the LV and Copying the Data

a. Power off the machine and boot into an Arch install ISO

b. Create an EFI and an LVM partition using gdisk (I'm not going into detail on this, but there are plenty of tutorials online). Make sure the EFI partition is the same size as your old one

Here’s my layout. The new drive shows as /dev/nvme1n1

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p2 1050624 1953525134 1952474511  931G Linux LVM

c. Copy the old EFI partition to the new EFI partition with dd (note that my old drive mounted via the USB-C enclosure now shows as /dev/sdb)

# dd if=/dev/sdb1 of=/dev/nvme1n1p1 bs=4M

d. Create the PV/VG

# vgcreate root_vg /dev/nvme1n1p2

e. Create the logical volumes. Make sure that they have the same size as the old partitions

# lvcreate -n swap -L 32G root_vg
# lvcreate -n root -L 110G root_vg
# lvcreate -n storage -L 96G root_vg

f. Format the swap partition (there's no need to copy it, so we just recreate it)

# mkswap -L swap /dev/root_vg/swap

g. Copy the root partition

# dd if=/dev/sdb3 of=/dev/root_vg/root bs=4M

h. Copy the storage partition (or any other partition if you had it)

# dd if=/dev/sdb4 of=/dev/root_vg/storage bs=4M

i. Our new drive is now created and populated with the old data

# lsblk
nvme0n1             259:0    0 476.9G  0 disk
├─nvme0n1p1         259:1    0   250G  0 part /home
└─nvme0n1p2         259:2    0 226.9G  0 part /mnt/storage2
nvme1n1             259:3    0 931.5G  0 disk
├─nvme1n1p1         259:4    0   512M  0 part /efi
└─nvme1n1p2         259:5    0   931G  0 part
  ├─root_vg-swap    254:0    0    32G  0 lvm  [SWAP]
  ├─root_vg-root    254:1    0   110G  0 lvm  /
  └─root_vg-storage 254:2    0    96G  0 lvm  /mnt/storage1

Additional Configuration

We now need to fix our EFI and GRUB to make sure we can boot into our LV partition.

a. Mount root

# mkdir /mnt/root

# mount /dev/root_vg/root /mnt/root

b. Enter chroot of our copied data

# arch-chroot /mnt/root

c. Mount our EFI partition

# mount /dev/nvme1n1p1 /efi

d. Fix fstab for the swap partition (because we did not copy our swap partition, a new UUID was generated. We need to change /etc/fstab with that new partition UUID)

UUID=d9e1b4f1-40eb-4bb6-908c-33012c7c0902	none		swap		defaults	0 0

e. Edit /etc/mkinitcpio.conf and add lvm2 to HOOKS

HOOKS=(base udev ... block lvm2 filesystems)

f. Generate the ramdisk environment to boot the kernel

# mkinitcpio -P

g. Re-install GRUB

# grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB

h. Enable the LVM module for GRUB by editing /etc/default/grub and adding the lvm module to GRUB_PRELOAD_MODULES

# Preload both GPT and MBR modules so that they are not missed
GRUB_PRELOAD_MODULES="part_gpt part_msdos lvm"

i. Re-create GRUB config

# grub-mkconfig -o /boot/grub/grub.cfg

j. Unmount /efi

# umount /efi

k. Exit chroot

# exit

l. Unmount root

# umount /mnt/root

m. Power off, remove the old drive and the Arch iso and reboot

# poweroff

If everything was done correctly you should be able to boot your system normally. If something is not working don’t panic. Your data should be safe and sound on your old drive. Review your steps, do additional research for anything specific to your configuration and try again.


Reference:

RHCSA v8: Linux Logical Volume Manager

Linux RedHat RHCSA

Disclaimer

This blog post is essentially my study notes for the RHCSA v8 exam on Logical Volume Manager. Take it as a quick reference document (or a cheat sheet of sorts).

It covers the following exam subjects:

  • Configure local storage
    • Create and remove physical volumes
    • Assign physical volumes to volume groups
    • Create and delete logical volumes
  • Create and configure file systems
    • Extend existing logical volumes

Note: There could be additional information needed for the exam that is not covered here.

Overview

Logical Volume Manager (LVM) is a device mapper target that provides logical volume management for the Linux kernel. LVM lets you manage storage as a flexible pool of space instead of fixed-size partitions.

PV - Physical Volumes are directly related to hard drives or partitions

VG - A Volume Group can have multiple physical Volumes

LV - A Logical Volume sits inside a Volume Group and it’s what is assigned to a file system.

The filesystem sits on top of the logical volume and is formatted to a specific fs type (vfat, xfs, ext4) and mounted (/root, /home, /mnt/*, etc).

When a physical disk is set up for LVM, metadata is written at the beginning of the disk for normal usage, and at the end of the disk for backup usage.

Overview of Creating a Physical Volume

First, initialize the disks to be used by LVM with pvcreate (Initialize physical volume(s) for use by LVM)

# pvcreate /dev/device /dev/device2 /dev/device3

Then we create a volume group with vgcreate (Create a volume group)

# vgcreate [vg name] /dev/device /dev/device2 /dev/device3

Optionally use the -s switch to set the Physical Extent size (for LVM2, the only effect this flag has is that when using too many physical volumes, the LVM tools will perform better)

Create the Logical Volume (4GB)

# lvcreate -L 4g [vg name] -n [lv name]

Flags:

  • -n - set the Logical Volume name
  • -l - use extents rather than a specified size

And finally create the file system

# mkfs.xfs /dev/[vgname]/[lvname]
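Once formatted, the LV mounts like any other block device; a hypothetical /etc/fstab entry (the VG, LV, and mount point names are placeholders) might look like:

```
/dev/vg1/lv_data   /mnt/data   xfs   defaults   0 0
```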

Working With LVM

Physical Volumes

Commands reference:

  • lvm (8) - LVM2 tools
  • pvcreate (8) - Initialize physical volume(s) for use by LVM
  • pvdisplay (8) - Display various attributes of physical volume(s)
  • pvremove (8) - Remove LVM label(s) from physical volume(s)
  • pvs (8) - Display information about physical volumes

Creating Physical Volumes

Physical volumes can be created using full disks or partitions.

# pvcreate /dev/part1 /dev/part2

Or

# pvcreate /dev/sdb /dev/sdc

Deleting Physical Volumes

pvremove wipes the label on a device so that LVM will no longer recognize it as a PV. A PV cannot be removed from a VG while it is used by an active LV.

Removing a PV

# pvremove /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.

Trying to remove a PV that has a VG and LV

# pvremove /dev/sdb /dev/sdc
  PV /dev/sdb is used by VG testvg so please use vgreduce first.
  (If you are certain you need pvremove, then confirm by using --force twice.)
  /dev/sdb: physical volume label not removed.
  PV /dev/sdc is used by VG testvg so please use vgreduce first.
  (If you are certain you need pvremove, then confirm by using --force twice.)
  /dev/sdc: physical volume label not removed.

You can try to force remove with -ff

# pvremove -ff /dev/sdb /dev/sdc
  WARNING: PV /dev/sdb is used by VG testvg.
Really WIPE LABELS from physical volume "/dev/sdb" of volume group "testvg" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdb of volume group "testvg".
  WARNING: PV /dev/sdc is used by VG testvg.
  Really WIPE LABELS from physical volume "/dev/sdc" of volume group "testvg" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdc of volume group "testvg".

Volume Groups

Commands reference:

  • lvm (8) - LVM2 tools
  • vgcreate (8) - Create a volume group
  • vgdisplay (8) - Display volume group information
  • vgextend (8) - Add physical volumes to a volume group
  • vgreduce (8) - Remove physical volume(s) from a volume group
  • vgremove (8) - Remove volume group(s)
  • vgs (8) - Display information about volume groups

Creating a Volume Group

vgcreate creates a new VG on block devices. If the devices were not previously initialized as PVs with pvcreate, vgcreate will initialize them, making them PVs. The pvcreate options for initializing devices are also available with vgcreate.

We create a volume group with vgcreate

# vgcreate [vg name] /dev/device /dev/device2 /dev/device3

For example:

# vgcreate vg1 /dev/sdb /dev/sdc
  Volume group "vg1" successfully created

Listing the volume group

# vgs vg1
  VG   #PV #LV #SN Attr   VSize   VFree
  vg1    2   0   0 wz--n-   5.99g 5.99g

Or with more details

# vgdisplay vg1
  --- Volume group ---
  VG Name               vg1
  System ID              
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.99 GiB
  PE Size               4.00 MiB
  Total PE              1534
  Alloc PE / Size       0 / 0    
  Free  PE / Size       1534 / 5.99 GiB
  VG UUID               uvHpRZ-BdPH-Nzxy-Lp15-VMps-fzPZ-A1bebc

You can also create a PV with vgcreate (bypassing the need to run pvcreate)

# vgcreate vg2 /dev/sdd
  Physical volume "/dev/sdd" successfully created.
  Volume group "vg2" successfully created

Extending a Volume Group

You can use vgextend to extend a volume group by adding physical volumes to it.

Initialize the new drive as a physical volume with pvcreate

# pvcreate /dev/sde
  Physical volume "/dev/sde" successfully created.

Then add the new physical volume to the volume group

# vgextend vg1 /dev/sde
  Volume group "vg1" successfully extended
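
Like vgcreate, vgextend can initialize a bare device itself, so the separate pvcreate step is optional. A sketch, assuming a spare /dev/sdf (a hypothetical device used here for illustration):

```shell
# vgextend initializes the device as a PV if needed, then adds it to the VG
vgextend vg1 /dev/sdf
```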

Reducing a Volume Group

vgreduce removes one or more unused PVs from a VG.

List the volume group (note it has just under 8.99 GiB of space)

# vgs vg1
  VG  #PV #LV #SN Attr   VSize  VFree  
  vg1   3   0   0 wz--n- <8.99g <8.99g

Remove one of the physical volumes

# vgreduce vg1 /dev/sde
  Removed "/dev/sde" from volume group "vg1"

List the volume group again (now it has 5.99 GiB)

# vgs vg1
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   2   0   0 wz--n- 5.99g 5.99g

Deleting/Removing a Volume Group

vgremove removes one or more VGs. If LVs exist in the VG, a prompt is used to confirm LV removal.

# vgremove vg1
  Volume group "vg1" successfully removed

Logical Volumes

Commands reference:

  • lvm (8) - LVM2 tools
  • lvcreate (8) - Create a logical volume
  • lvdisplay (8) - Display information about a logical volume
  • lvextend (8) - Add space to a logical volume
  • lvresize (8) - Resize a logical volume
  • lvreduce (8) - Reduce the size of a logical volume
  • lvremove (8) - Remove logical volume(s) from the system
  • lvs (8) - Display information about logical volumes

Creating a Logical Volume

# lvcreate -L 4g [vg name] -n [lv name]

Flags:

  • -n - set the Logical Volume name
  • -l - specify the size in extents (or a percentage) rather than an absolute size

Example

Create the LV

# lvcreate -L 4g vg1 -n lv1
WARNING: ext4 signature detected on /dev/vg1/lv1 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/vg1/lv1.
  Logical volume "lv1" created.
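
The -l flag sizes the LV in extents instead, and accepts percentage forms. As a sketch, a hypothetical second LV taking whatever space remains in the VG:

```shell
# Use all remaining free extents in the VG for a new LV
lvcreate -l 100%FREE vg1 -n lv2
```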

Display simple information about the LV

# lvs vg1
  LV   VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1 -wi-a----- 4.00g   

Simple information with verbose

# lvs -v vg1
  LV   VG  #Seg Attr       LSize Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                                LProfile
  lv1  vg1    2 -wi-a----- 4.00g  -1  -1  253    2                                                     ADUPcG-YAuo-5vDC-7FEB-Cas9-4Gt0-hR1kVD  

Detailed information

# lvdisplay vg1
  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                ADUPcG-YAuo-5vDC-7FEB-Cas9-4Gt0-hR1kVD
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2020-11-18 08:07:29 -0500
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Current LE             1024
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

Extending a Logical Volume

lvextend adds space to a logical volume. The space needs to be available in the volume group.

When extending logical volumes, you do not need to unmount the partition. However, you will need to grow the file system afterwards, or, if the file system supports it, use the ‘-r’ flag to resize the file system automatically.
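
A sketch of the ‘-r’ form, assuming an ext4 or XFS file system is mounted from the LV:

```shell
# Grow the LV by 1 GiB and resize the file system in one step
lvextend -r -L +1G /dev/vg1/lv1
```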

Checking for available space

Use vgs to see the available space of the volume group

# vgs vg1
  VG  #PV #LV #SN Attr   VSize  VFree  
  vg1   3   1   0 wz--n- <8.99g <4.99g
                           |      |- Available VG space (not allocated to a LV)
                           |- Total size of VG

You can use lvs to confirm that the LV is using the difference of the previous values

# lvs vg1
  LV   VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1  vg1 -wi-a----- 4.00g  

Or just use vgdisplay and check the PE sizes

# vgdisplay vg1 | grep 'PE /'
  Alloc PE / Size       1024 / 4.00 GiB
  Free  PE / Size       1277 / <4.99 GiB

Extending the Logical Volume

Extend volume to specified size (k/m/g)

# lvextend -L6G /dev/vg1/lv1
  Size of logical volume vg1/lv1 changed from 5.39 GiB (1381 extents) to 6.00 GiB (1536 extents).
  Logical volume vg1/lv1 successfully resized.

Extend the volume by 1GB

# lvextend -L+1G /dev/vg1/lv1
  Size of logical volume vg1/lv1 changed from 6.00 GiB (1536 extents) to 7.00 GiB (1792 extents).
  Logical volume vg1/lv1 successfully resized.

Extend for the full available space in the VG

# lvextend -l +100%FREE /dev/vg1/lv1
  Size of logical volume vg1/lv1 changed from 7.00 GiB (1792 extents) to <8.99 GiB (2301 extents).
  Logical volume vg1/lv1 successfully resized.

Note that lvextend -l 100%FREE /dev/vg1/lv1 (without the plus sign) will not work: without the plus, the percentage is treated as an absolute target size rather than an amount to add

Extend to a percentage of the VG size (60% of 8.99 GiB ≈ 5.39 GiB)

# lvextend -l 60%VG /dev/vg1/lv1
  Size of logical volume vg1/lv1 changed from 4.00 GiB (1024 extents) to 5.39 GiB (1381 extents).
  Logical volume vg1/lv1 successfully resized.

You can also extend by a number of physical extents (PEs)

# lvextend -l +1740 /dev/RHCSA/pinehead  
  Size of logical volume RHCSA/pinehead changed from <3.20 GiB (818 extents) to 9.99 GiB (2558 extents).
  Logical volume RHCSA/pinehead successfully resized.

Shrinking a Logical Volume

Be careful when reducing an LV’s size, because data in the reduced area is lost. Ensure that any file system on the LV is resized before running lvreduce so that the removed extents are not in use by the file system.

You can use two commands to shrink a logical volume:

  • lvreduce reduces the size of an LV. The freed logical extents are returned to the VG to be used by other LVs.
  • lvresize resizes an LV in the same way as lvextend and lvreduce.
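
The warning above can be sketched as a safe shrink workflow for an ext4 file system: unmount, check, shrink the file system first, then the LV to the same size (lvreduce’s own ‘-r’ flag can perform the file-system step for you). Paths and sizes here are illustrative:

```shell
# Safely shrink an ext4 file system and its LV (adjust sizes/paths)
umount /mnt/lv1              # assumed mount point
e2fsck -f /dev/vg1/lv1       # resize2fs requires a clean check first
resize2fs /dev/vg1/lv1 4G    # shrink the file system first...
lvreduce -L 4G /dev/vg1/lv1  # ...then the LV, to the same size
```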

Shrink a logical volume by 2GB

# lvresize -L-2G /dev/vg1/lv1
  WARNING: Reducing active logical volume to <6.99 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg1/lv1? [y/n]: y
  Size of logical volume vg1/lv1 changed from <8.99 GiB (2301 extents) to <6.99 GiB (1789 extents).
  Logical volume vg1/lv1 successfully resized.

Shrink a logical volume to 30% of the volume group size

# lvreduce -l 30%VG  /dev/vg1/lv1
  WARNING: Reducing active logical volume to <2.70 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg1/lv1? [y/n]: y
  Size of logical volume vg1/lv1 changed from <6.99 GiB (1789 extents) to <2.70 GiB (691 extents).
  Logical volume vg1/lv1 successfully resized.

Deleting/Removing a Logical Volume

lvremove removes one or more LVs. For standard LVs, this returns the logical extents that were used by the LV to the VG for use by other LVs.

# lvremove /dev/vg1/lv1
Do you really want to remove active logical volume vg1/lv1? [y/n]: y
  Logical volume "lv1" successfully removed