What does «mdadm: /dev/md/<id> has been started with 1 drive (out of 2)» mean?

mdadm: /dev/md/2 has been started with 1 drive (out of 2).
mdadm: /dev/md/1 has been started with 1 drive (out of 2).
mdadm: /dev/md/0 has been started with 1 drive (out of 2).

This output means that all three software RAID arrays (md0, md1, md2) that you tried to assemble were started with only one of their two disks active, i.e. each array is in a «degraded» state.
If these are, for example, RAID1 arrays, they can still function with a single disk, but without fault tolerance: if the remaining active disk fails, all the data is at risk.
If they were RAID0 (or any other level that requires all members to be present), the array would be non-functional and the data on it inaccessible.
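The degraded state is also visible directly in /proc/mdstat: a healthy two-disk array shows `[UU]`, while a degraded one shows `[U_]`. A minimal sketch of detecting this (the mdstat fragment below is illustrative sample data, not output from your system; on a real machine read /proc/mdstat itself):

```shell
# Sample /proc/mdstat fragment for a two-disk RAID1 running on one drive
# (illustrative data; on a real system: mdstat=$(cat /proc/mdstat)).
mdstat=$(cat <<'EOF'
md0 : active raid1 sda1[0]
      1048512 blocks [2/1] [U_]
EOF
)

# [U_] means device slot 1 is down; a healthy two-disk array shows [UU].
if printf '%s\n' "$mdstat" | grep -q '\[U_\]'; then
    echo "md0 is degraded: one of two members is missing"
fi
```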
The most common reason is that the second disk:

  1. is not recognized by the system (a hardware problem with the cable or controller);
  2. has no valid mdadm superblock (the metadata is corrupted, or the disk was never added to the array);
  3. was disabled by the administrator, or was previously marked faulty and «kicked out» of the array.
To understand the situation more precisely, check:
  • the kernel log (dmesg),
  • the state of each physical disk (smartctl, fdisk/lsblk, etc.),
  • the configuration in /etc/mdadm/mdadm.conf and the output of mdadm --detail /dev/mdX.
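If these turn out to be RAID1 arrays and the missing disk is actually healthy, it can usually be returned to the array with mdadm. A hedged sketch of the typical sequence; /dev/md0, /dev/sdb and /dev/sdb1 are placeholder names for illustration, substitute your own devices, and all commands require root:

```
cat /proc/mdstat                    # overview of all arrays and their [UU]/[U_] state
mdadm --detail /dev/md0             # array state, e.g. "State : clean, degraded"
mdadm --examine /dev/sdb1           # inspect the superblock on the missing member
smartctl -a /dev/sdb                # SMART health of the physical disk
dmesg | grep -i -e sdb -e md0       # kernel messages about the disk and the array

# If the superblock is intact, try returning the member with its metadata:
mdadm /dev/md0 --re-add /dev/sdb1
# Otherwise add it as a fresh member (this triggers a full resync):
mdadm /dev/md0 --add /dev/sdb1
```

Note that `--re-add` only works if the member's metadata still matches the array; otherwise `--add` performs a full rebuild, during which the array remains degraded.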