How to Install macOS Catalina in Virtual Box

Linux VirtualBox macOS

This document provides instructions for getting a macOS Catalina install working with Guest Additions on VirtualBox 6.1.

Start by downloading a copy of the installer from the App Store. You will need a computer running macOS, and you will need to convert the installer to an ISO. I will not be covering these steps here, but there are plenty of tutorials online.

Installing VirtualBox

I’m hoping you already have this part covered and are only here because you had issues getting macOS installed, but just in case, make sure the following packages are installed (Arch):

$ pacman -Qs virtualbox
local/virtualbox 6.1.6-1
    Powerful x86 virtualization for enterprise as well as home use
local/virtualbox-ext-oracle 6.1.6-1
    Oracle VM VirtualBox Extension Pack
local/virtualbox-guest-iso 6.1.6-1
    The official VirtualBox Guest Additions ISO image
local/virtualbox-host-modules-arch 6.1.6-1
    Virtualbox host kernel modules for Arch Kernel
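If any of these are missing, the first three come from the official repositories, while virtualbox-ext-oracle lives in the AUR (yay below is just an example; use whatever AUR helper you prefer):

```shell
# Official repositories:
sudo pacman -S --needed virtualbox virtualbox-guest-iso virtualbox-host-modules-arch

# Extension pack from the AUR:
yay -S virtualbox-ext-oracle
```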

Creating the VM

Start by creating a new VM. Give it a name without spaces.

Give it enough memory so the install will run faster. We can change this later.

Select the option to create a new Virtual Hard Disk

Select VDI

Select fixed size

Give it a bare minimum of 25GB (I would advise at least 50GB if you can spare the space)

Edit the machine settings, go into “System => Motherboard”, disable floppy boot, and change the chipset to PIIX3

In “System => Processor”, assign more CPUs if you can spare them

In “Display => Screen” increase the video memory to 128MB and enable 3D acceleration

Enable USB 3.0

Note: if you can’t see USB 3.0 you might need to add your user to the vboxusers group
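Adding yourself to the group is the standard usermod invocation (nothing specific to this guide); log out and back in, or reboot, for it to take effect:

```shell
# Add the current user to the vboxusers group
sudo usermod -aG vboxusers "$USER"
```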

Insert the install ISO

Additional Configuration

Now we need to run a few vboxmanage commands for additional settings. Either run the vboxmanage lines from the script below by hand (replacing $vm_name with your virtual machine’s name), or save the full script and run it (it will prompt you to select the VM).

Note: the script sets the resolution to “1920x1080”. You can change it in the last line of the script. Make sure to keep it to one of “640x480, 800x600, 1280x1024, 1440x900, 1920x1200”

#!/bin/bash

PS3='Please select the VM: '
n=0
while read line ; do
  options[n++]="$line"
done <<<"$(vboxmanage list vms | awk '{$NF=""; print $0}' | tr -d '"')"
select opt in "${options[@]}" ; do
  vm_name="$opt"
  break
done

echo "Running updates for \"${vm_name}\" VM"

vboxmanage modifyvm "$vm_name" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff && echo "Changed CPU ID Set" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3" && echo "Changed DmiSystemProduct" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0" && echo "Changed DmiSystemVersion" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple" && echo "Changed DmiBoardProduct" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" && echo "Changed DeviceKey" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1 && echo "Changed GetKeyFromRealSMC" ; sleep .5
vboxmanage setextradata "$vm_name" "VBoxInternal2/EfiGraphicsResolution" "1920x1080" && echo "Changed resolution to 1920x1080" ; sleep .5

Installing macOS

We are now ready for the install. Start the machine and select your language.

Select disk utility.

Select the VirtualBox hard disk and click on erase.

Give it a name, leave the default format options, and click on Erase. Close Disk Utility when done.

Click on install macOS.

Continue with the install as you would until you are presented with the desktop.

Note that the installer will reboot once in the middle of the install.

Installing Guest Additions

We now need to get Guest Additions installed. Remove the install ISO and mount the Guest Additions CD. Open it in Finder and run VBoxDarwinAdditions.pkg.

Install it as if you would install any other package.

You will most likely get an error message that the install has failed.

Close everything, unmount the guest additions CD and open a terminal window (Command+Space, type terminal).

Now we need to restart the VM in recovery mode. Type in:

sudo nvram "recovery-boot-mode=unused"
sudo reboot

Once it has rebooted into recovery mode, on the top click on “Utilities => Terminal”

Type in the commands below. The first authorizes Oracle’s kernel extensions (VB5E2TV963 is Oracle’s developer Team ID). Then we disable recovery mode and restart the VM.

spctl kext-consent add VB5E2TV963
nvram -d recovery-boot-mode
reboot
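Before rebooting, you can optionally confirm the Team ID was registered:

```shell
# Still in the recovery-mode terminal; should list VB5E2TV963
spctl kext-consent list
```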

When the VM has rebooted you should have guest additions working. Remember that not everything works, but the features below should:

  • Copy/Paste
  • ~Drag and Drop~ Reported as not working (see comments)
  • VirtualBox Shared Folders
  • Guest Control

How to Install chrome-remote-desktop on Arch

Linux Arch

Chrome Remote Desktop has been around for quite a while, but now Google offers a .deb installer with native Linux support via systemd. This is great because it removes the need to set up VPNs and VNC to remotely connect to your machines, or to lend a hand to a not-so-tech-savvy family member or friend.

Unfortunately the installer is only for Ubuntu (and Debian based distros), but with a few steps we can get it running on Arch, and (thanks to a patch by nightuser) even configure it to use existing X sessions instead of creating a new one (which is the default behavior).

As expected, the package exists in the AUR, so the install should be pretty simple.

Instructions

Install

a. Install chrome-remote-desktop from the AUR

b. Run crd --setup to configure your connection. Hit any key

c. Select your Desktop Environment (I selected KDE which is what I use) and save the file

d. Press any key again

e. Enter a new resolution if you would like to use something different than the default (1366x768). Save the file

f. You should see the confirmation that the setup is complete

g. Go to https://remotedesktop.google.com/headless and click on ‘Begin’

h. Click on ‘Next’

i. Click on ‘Authorize’

j. Select the Google account you would like to use

k. Give it permission

l. Click the copy button and paste the command into your terminal

m. Give the computer a friendly name and a PIN to access it

You should get a confirmation that everything went OK

Starting Xvfb on display :20
X server is active.
Launching X session: ['/bin/sh', '/home/victor/.chrome-remote-desktop-session']
Launching host process
['/opt/google/chrome-remote-desktop/chrome-remote-desktop-host', '--host-config=-', '--audio-pipe-name=/home/victor/.config/chrome-remote-desktop/pulseaudio#ae6329c099/fifo_output', '--server-supports-exact-resize', '--ssh-auth-sockname=/tmp/chromoting.victor.ssh_auth_sock', '--signal-parent']
wait() returned (1092272,0)
Host ready to receive connections.
Log file: /tmp/chrome_remote_desktop

Additional Configuration

The additional configuration will allow you to connect to an existing session instead of creating a new one when connecting.

a. Find what display number X is using

$ echo $DISPLAY
:0

b. Create a file in ~/.config/chrome-remote-desktop/Xsession with the display value

$ echo "0" > ~/.config/chrome-remote-desktop/Xsession
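If your $DISPLAY includes a screen suffix (e.g. “:0.0”), a slightly more robust sketch strips both the leading colon and the suffix before writing the file (the helper name is mine, not part of the patch):

```shell
#!/bin/bash
# Derive the bare display number from a $DISPLAY value like ":0" or ":0.0"
display_number() {
  local d=${1#:}            # drop the leading colon
  printf '%s\n' "${d%%.*}"  # drop a ".screen" suffix, if present
}

display_number ":0"    # prints 0
display_number ":0.0"  # prints 0
# display_number "$DISPLAY" > ~/.config/chrome-remote-desktop/Xsession
```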

c. Stop the chrome-remote-desktop.service

$ systemctl --user stop chrome-remote-desktop.service

d. Check if it stopped with crd --status. If it did not, stop it with crd --stop

$ crd --status
CRD status: STOPPED

e. Take a backup of /opt/google/chrome-remote-desktop/chrome-remote-desktop

f. Download the patched /opt/google/chrome-remote-desktop/chrome-remote-desktop to the same location, or follow the instructions to manually modify your file here.

Note: The patched version was tested with chrome-remote-desktop 81.0.4044.60-1

g. Start the agent with crd --start so you can see verbose output. You should receive a confirmation when it starts

$ crd --start

Launching X server and X session.
Using existing Xorg session: 0
Launching host process
['/opt/google/chrome-remote-desktop/chrome-remote-desktop-host', '--host-config=-', '--audio-pipe-name=/home/victor/.config/chrome-remote-desktop/pulseaudio#ae6329c099/fifo_output', '--ssh-auth-sockname=/tmp/chromoting.victor.ssh_auth_sock', '--signal-parent']
Host ready to receive connections.
Log file: /tmp/chrome_remote_desktop_20200402_202207_2vtQSb

h. Go to https://remotedesktop.google.com/ from another computer and try to access your computer

How to Move Plex Installation on FreeNAS 11.3

FreeNAS

I had some free time this weekend and decided to upgrade my FreeNAS. I went from 11.1 to 11.3-U1 and the upgrade installed without any issues. However, after the reboot I discovered that my jails and plugins were missing from the UI and that they were not running. I had read the manual (FreeNAS® 11.3-U1 User Guide) before the upgrade, and the instructions did not mention anything about the plugins, so I was a little worried.

After spending a lot of time researching I discovered that on FreeNAS 11.2 the project started to use the ‘iocage’ jail method instead of ‘warden’. FreeNAS 11.2 had the option of migrating your jails, and it could even display them in the UI. But for 11.3-U1 that was no longer an option.

If you are in the same boat as me, the instructions below will help you quickly create a new Plex jail and move your old data into it. If you have not yet upgraded to 11.3-U1, you might want to convert your jail before upgrading. There are a lot of tutorials online on how to convert your jail that might be more useful to you.

Instructions

a. Create the plex user with UID 972 (this is the username and UID that is used by the project)

b. If desired, create a new Dataset to have Plex data outside of the plugin Dataset. I won’t go into details for this type of setup here as I keep my Plex data inside the Plex plugin Dataset

c. Install the ‘Plex Media Server’ plugin (official instructions)

d. Stop the plugin

e. Go to ‘Storage => Pools’ and edit the ACL for the Dataset where your media is saved. We want to give access to the ‘plex’ user (in case the files are not owned by ‘plex’)

f. With the plugin still stopped, copy the old installation data folder from the old plugin Dataset to the new plugin Dataset

Note: The JAIL_ROOT location will vary between different FreeNAS versions:

  • FreeNAS 11.1 and below (warden) - JAIL_ROOT=/mnt/[Volume]/jails/[JAIL_NAME]
  • FreeNAS 11.2 and above (iocage) - JAIL_ROOT=/mnt/[Volume]/iocage/jails/[JAIL_NAME]

Source for your old Plex plugin (warden)

If installed manually

${JAIL_ROOT}/root/usr/local/plexdata/Plex Media Server/

If installed via plugin

${JAIL_ROOT}/var/db/plexdata/Plex Media Server/

Destination (iocage)

${JAIL_ROOT}/root/Plex Media Server/
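The copy in step f can be sketched as below; the helper name and the paths in the example invocation are placeholders, so substitute the warden source and iocage destination from the table above:

```shell
#!/bin/bash
# Copy the old Plex data folder into the new jail, preserving metadata
copy_plex_data() {
  local src=$1 dst=$2
  mkdir -p "$dst"
  # -a preserves ownership, permissions and timestamps; quoting matters
  # because "Plex Media Server" contains spaces
  cp -a "$src/." "$dst/"
}

# Example invocation (plugin-installed warden jail -> iocage jail):
# copy_plex_data \
#   "/mnt/Volume1/jails/plex/var/db/plexdata/Plex Media Server" \
#   "/mnt/Volume1/iocage/jails/plex/root/Plex Media Server"
```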

g. In the jails configuration menu, select the new Plex jail and add the mount point for the media folder. Try to keep the same path as the old jail so you won’t have to edit your library. If you don’t remember what the path was, you can find it by looking at the contents of /mnt/[Volume]/jails/.[JAIL_NAME].meta/fstab

# cat .plex.meta/fstab
/mnt/Volume1/Movies /mnt/Volume1/jails/plex/mnt/Media nullfs rw 0 0

h. Start the plugin and try to access it via web



RHCSA v8: Work with Package Module Streams

Linux RedHat RHCSA

Disclaimer

These are my study notes for the RHCSA exam on YUM modules. There’s most likely more information than what’s needed for the exam, and I cannot guarantee that all information is correct.

Definition

RHEL 8 content is distributed through two main repositories: BaseOS and AppStream.

BaseOS

Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of Red Hat Enterprise Linux.

AppStream

Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of the varied workloads and use cases. Content in AppStream is available in one of two formats - the familiar RPM format and an extension to the RPM format called modules.

Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in Red Hat Enterprise Linux 8. Each AppStream component has a given life cycle.

Modules

Modules allow you to install a specific version and/or type of an application on your system. For example, for ‘postgresql’ you can choose from multiple versions (streams), and a client or server type (profiles).

# yum module list postgresql

Last metadata expiration check: 0:20:44 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
CentOS-8 - AppStream
Name                     Stream              Profiles                        Summary
postgresql               9.6           client, server [d]              PostgreSQL server and client module
postgresql               10 [d]        client, server [d]              PostgreSQL server and client module
postgresql               12            client, server                  PostgreSQL server and client module

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

For httpd on CentOS 8, currently only one stream (version) is available, and the profiles are the package types (common, devel, minimal)

# yum module list httpd

Last metadata expiration check: 0:21:46 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
CentOS-8 - AppStream
Name                  Stream               Profiles                              Summary
httpd                 2.4 [d][e]           common [d], devel, minimal            Apache HTTP Server

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

Working with Modules

Getting Information on Modules

Listing all modules

# yum module list

Listing module summary for one module with yum module list [module]

# yum module list httpd

Last metadata expiration check: 0:21:46 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
CentOS-8 - AppStream
Name                  Stream               Profiles                              Summary
httpd                 2.4 [d][e]           common [d], devel, minimal            Apache HTTP Server

Listing info on a module with yum module info [module]

# yum module info httpd

Last metadata expiration check: 0:35:45 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
Name             : httpd
Stream           : 2.4 [d][e][a]
Version          : 8010020191223202455
Context          : cdc1202b
Architecture     : x86_64
Profiles         : common [d], devel, minimal
Default profiles : common
Repo             : AppStream
Summary          : Apache HTTP Server
Description      : Apache httpd is a powerful, efficient, and extensible HTTP server.
Artifacts        : httpd-0:2.4.37-16.module_el8.1.0+256+ae790463.src
                 : httpd-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : httpd-debuginfo-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : httpd-debugsource-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : httpd-devel-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : httpd-filesystem-0:2.4.37-16.module_el8.1.0+256+ae790463.noarch
                 : httpd-manual-0:2.4.37-16.module_el8.1.0+256+ae790463.noarch
                 : httpd-tools-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : httpd-tools-debuginfo-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_http2-0:1.11.3-3.module_el8.1.0+213+acce2796.src
                 : mod_http2-0:1.11.3-3.module_el8.1.0+213+acce2796.x86_64
                 : mod_http2-debuginfo-0:1.11.3-3.module_el8.1.0+213+acce2796.x86_64
                 : mod_http2-debugsource-0:1.11.3-3.module_el8.1.0+213+acce2796.x86_64
                 : mod_ldap-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_ldap-debuginfo-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_md-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_md-debuginfo-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_proxy_html-1:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_proxy_html-debuginfo-1:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_session-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_session-debuginfo-0:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_ssl-1:2.4.37-16.module_el8.1.0+256+ae790463.x86_64
                 : mod_ssl-debuginfo-1:2.4.37-16.module_el8.1.0+256+ae790463.x86_64

Listing profiles with yum module info --profile [module]

# yum module info --profile httpd

Last metadata expiration check: 0:36:28 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
Name    : httpd:2.4:8010020191223202455:cdc1202b:x86_64
common  : httpd
        : httpd-filesystem
        : httpd-tools
        : mod_http2
        : mod_ssl
devel   : httpd
        : httpd-devel
        : httpd-filesystem
        : httpd-tools
minimal : httpd

You can also filter the information with [module_name]:[stream]

# yum module info --profile php:7.3

Enabling Stream

Note that switching module streams will not alter installed packages. You will need to remove the package, reset and enable the new stream, and then reinstall the package.
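Assuming ‘postgresql’ is installed from the 9.6 stream, the full switch to the 10 stream looks roughly like this (package and stream names are illustrative):

```shell
# Remove the installed content, reset the module, then enable the new
# stream and reinstall:
sudo yum -y remove postgresql
sudo yum -y module reset postgresql
sudo yum -y module enable postgresql:10
sudo yum -y install postgresql
```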

Enable the stream for ‘postgresql’ v9.6

# yum module enable postgresql:9.6

Enable the httpd devel profile

# yum module enable --profile httpd:2.4/devel

Last metadata expiration check: 0:47:51 ago on Sat 14 Mar 2020 08:59:58 PM UTC.
Ignoring unnecessary profile: 'httpd/devel'
Dependencies resolved.
Nothing to do.
Complete!

Then install the package

# yum install postgresql httpd  

To change to another stream of an already enabled module, you cannot simply enable the new stream; you will need to run yum module reset [module name] first, and then enable the new one.

# yum module enable postgresql:10

Last metadata expiration check: 0:06:07 ago on Sat 14 Mar 2020 09:57:50 PM UTC.
Dependencies resolved.
The operation would result in switching of module 'postgresql' stream '9.6' to stream '10'
Error: It is not possible to switch enabled streams of a module.
It is recommended to remove all installed content from the module, and reset the module using 'dnf module reset <module_name>' command. After you reset the module, you can install the other stream.
# yum module reset postgresql

Last metadata expiration check: 0:06:15 ago on Sat 14 Mar 2020 09:57:50 PM UTC.
Dependencies resolved.
=================================================================================================
Package               Architecture         Version                  Repository             Size
=================================================================================================
Resetting modules:
postgresql                                                                                

Transaction Summary
=================================================================================================

Is this ok [y/N]: y
Complete!

RHCSA v8: Configure Disk Compression

Linux RedHat RHCSA

Disclaimer

These are my study notes for the RHCSA exam on disk compression. There’s most likely more information than what’s needed for the exam, and I cannot guarantee that all information is correct.

Definition

Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When you set up a VDO volume, you specify a block device on which to construct your VDO volume and the amount of logical storage you plan to present.

VDO was introduced in the Red Hat Enterprise Linux 7.5 Beta. It is a kernel module that can save disk space and reduce replication bandwidth. VDO sits on top of any block storage device and provides zero-block elimination, deduplication of redundant blocks, and data compression.

VDO can be applied to a block device, and then normal disk operations can be applied to that device. LVM for example, can sit on top of VDO.

Physical disk -> VDO -> Volume group -> Logical volume -> File system

Requirements and Recommendations

Memory

Each VDO volume has two distinct memory requirements:

The VDO module

VDO requires 370 MB of RAM plus an additional 268 MB per each 1 TB of physical storage managed by the volume.

The Universal Deduplication Service (UDS) index

UDS requires a minimum of 250 MB of RAM, which is also the default amount that deduplication uses.

The memory required for the UDS index is determined by the index type and the required size of the deduplication window (the Red Hat documentation has tables with the exact values).

Note: Sparse is the recommended configuration.

Storage

Logical Size

Specifies the logical VDO volume size: how much storage we tell the OS we have. Because of deduplication and compression, this number will be bigger than the real physical size. The ratio will vary according to the type of data being stored (already-compressed data such as binaries, video, and audio will have a very low ratio).

Red Hat’s Recommendation

For active VMs or container storage

Use a logical size that is ten times the physical size of your block device. For example, if your block device is 1TB in size, use 10T here.

For object storage

Use a logical size that is three times the physical size of your block device. For example, if your block device is 1TB in size, use 3T here.
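The two recommendations above can be encoded in a small helper (the function name is mine; the ratios are Red Hat’s):

```shell
#!/bin/bash
# Compute a recommended --vdoLogicalSize from the physical size in TB:
# 10x for VM/container storage, 3x for object storage
recommended_logical_size() {
  local physical_tb=$1 workload=$2
  case "$workload" in
    vm|container) echo "$(( physical_tb * 10 ))T" ;;
    object)       echo "$(( physical_tb * 3 ))T" ;;
    *) echo "usage: recommended_logical_size <TB> vm|container|object" >&2
       return 1 ;;
  esac
}

recommended_logical_size 1 vm      # prints 10T
recommended_logical_size 1 object  # prints 3T
```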

Slab Size

Specifies the size of the increment by which a VDO volume is grown. All of the slabs for a given volume will be of the same size, which may be any power of 2 multiple of 128 MB up to 32 GB. At least one entire slab is reserved by VDO for metadata, and therefore cannot be used for storing user data.

The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO volume may have up to 8096 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB.

The slab size values above are from the RHEL 7 documentation.

Examples of VDO System Requirements by Physical Volume Size

The Red Hat documentation provides tables with approximate system requirements of VDO based on the size of the underlying physical volume. Each table lists requirements appropriate to the intended deployment, such as primary storage or backup storage.

Deduplication, Indexing and Compression

Deduplication and Index

VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they are being stored. The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once.

Deduplication is enabled by default.

To disable deduplication during VDO volume creation (so only compression is used), use the --deduplication=disabled option (you will not be able to use the sparseIndex option)

# vdo create --name=[name] --device=/dev/[device] --vdoLogicalSize=[VDO logical size] --deduplication=disabled --vdoSlabSize=[slab size]

To enable/disable deduplication on an existing block

# vdo enableDeduplication --name=my_vdo

# vdo disableDeduplication --name=my_vdo

Compression

In addition to block-level deduplication, VDO also provides inline block-level compression using the HIOPS Compression™ technology.

VDO volume compression is on by default.

Compression operates on blocks that have not been identified as duplicates. When unique data is seen for the first time, it is compressed. Subsequent copies of data that have already been stored are deduplicated without requiring an additional compression step.

Configuration Steps

Install vdo (and, if not pulled in as a dependency, kmod-vdo)

# yum install vdo

Start the service

# systemctl start vdo.service

Create the volume

# vdo create --name=[name] --device=/dev/[device] --vdoLogicalSize=[VDO logical size] --sparseIndex=enabled --vdoSlabSize=[slab size]

Note: Using --sparseIndex=disabled will enable ‘dense’ indexing

Optionally add LVM config, and/or create the file system (make sure to use the option to not discard blocks)

# mkfs.ext4 -E nodiscard /dev/mapper/[name]

# mkfs.xfs -K /dev/mapper/[name]

Update the system with the new device

# udevadm settle

Mount the device

# mount /dev/mapper/[name] /mount/point

To add it to /etc/fstab, you will need additional parameters so that systemd waits for VDO to start before mounting

# /dev/mapper/vdo-device /mount/point [fstype] defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0

See man pages for systemd.mount:

x-systemd.device-timeout=
          Configure how long systemd should wait for a device to show up before
          giving up on an entry from /etc/fstab. Specify a time in seconds or
          explicitly append a unit such as "s", "min", "h", "ms".

x-systemd.requires=
          Configures a Requires= and an After= dependency between the created mount
          unit and another systemd unit, such as a device or mount unit.

Administration

Check for real physical space usage

# vdostats --human-readable

Device              Size   Used   Available   Use%   Space Saving%
/dev/mapper/my_vdo  1.8T  407.9G    1.4T       22%       21%
