I turned off the NAS and removed disk2 (6TB), which is the one I was not able to add again. After restarting the NAS (with bay 2 empty), the system still showed disk1 twice for volume1. The raid.conf still contained both entries, too.
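For anyone in a similar situation: the actual array state can be inspected over SSH before changing anything (assuming md1 is the data array, as it was in my case):

cat /proc/mdstat         # overview of all md arrays and their member disks
mdadm --detail /dev/md1  # detailed state of the affected array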
Therefore I decided to use:
mdadm --grow /dev/md1 --raid-devices=1 --force
via SSH to downgrade the RAID1 to a single-disk array.
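To verify that the shrink worked, something like this should report "Raid Devices : 1" afterwards (again assuming md1 is the affected array):

mdadm --detail /dev/md1 | grep -E 'Raid Devices|Active Devices|State'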
After a reboot, the volume was shown as "ready" in the user interface, but disk1 was still listed twice.
Therefore I checked raid.conf again, saw that data_0 and data_1 were still listed, and removed the two references to the second disk.
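If someone tries the same: keep an untouched copy of the file before editing (the path here is just an example; it may differ per model and firmware):

cp /etc/config/raid.conf /root/raid.conf.bak  # backup before any manual edit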
After that and another reboot, volume1 was gone.
And I was expecting to need my backup...
The same happened after deleting data_0 (instead of data_1) from the raid.conf and rebooting again.
After copying back the original raid.conf and another reboot, volume1 was reinitiated, with only one reference to disk1! No backup needed.
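The restore was just the backup step in reverse (same example path as above):

cp /root/raid.conf.bak /etc/config/raid.conf
reboot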
I assume (and really do not know) that the references shown in the user interface are not the ones from raid.conf, and that they had to be rebuilt by reconnecting the disk to the volume. What do you think?
Hot"plugging" the second disk, it was recocnized again - and finally I was able to upgrade the "single disk raid" back to a RAID1 with both disks.
The user interface estimates 15h for the rebuild, but that does not matter...
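The rebuild progress can also be watched from the shell:

cat /proc/mdstat  # shows a progress percentage and ETA for the resync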
Even though I do not understand what happened in the background when editing the raid.conf, I learned a lot.
So thanks for all the support - especially to dolbyman!
PS: Disk2 was checked in a Windows PC with diskpart: no partitions. So I am quite sure the error was in some corrupted configuration files.
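For completeness, that check amounts to something like this inside diskpart (N being the number shown by "list disk"):

list disk
select disk N
list partition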