by storageman » Thu Sep 13, 2018 3:34 pm
cosmin0608 wrote: Hello,
Do you have any idea whether I can transfer all files and config from one NAS to another?
QNAP support connected yesterday and the problem is still there. They are not connected anymore, so I want to transfer all files and config from this NAS to another one.
Thanks everyone.
What does that mean? Couldn't fix it, asked for further advice or what?
Since you can't mount the volumes, you won't be able to replicate what's on there.
by cosmin0608 » Thu Sep 13, 2018 3:42 pm
They didn't ask anything. They haven't even answered the open ticket, and it has already been too many days since the NAS went down. That is why I want to transfer the files from the NAS with the problem to another one.
by storageman » Thu Sep 13, 2018 3:51 pm
Jeez! Are they coming back???
Can you run "pvscan" and post the output?
by cosmin0608 » Thu Sep 13, 2018 3:52 pm
Here is the output.
I don't know if they are coming back. This morning I found in the log that the remote application had been reinstalled.
Code:
[~] # pvscan
Found duplicate PV H2KqzqqEjTHSwm5EQWmxf0GJknhBX1sn: using /dev/drbd1 not /dev/md1
Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
Found duplicate PV H2KqzqqEjTHSwm5EQWmxf0GJknhBX1sn: using /dev/drbd1 not /dev/md1
Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
PV /dev/drbd1 VG vg1 lvm2 [72.68 TiB / 0 free]
Total: 1 [72.68 TiB] / in use: 1 [72.68 TiB] / in no VG: 0 [0 ]
[~] #
by storageman » Thu Sep 13, 2018 4:09 pm
This looks like something's up with "/etc/lvm/lvm.conf".
https://access.redhat.com/documentation ... _multipath
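Something along these lines is what that doc describes, an lvm.conf filter so LVM prefers the DRBD device and stops flagging the duplicate PV (illustrative only, not tested on a QNAP; back the file up before touching it):
Code:
# /etc/lvm/lvm.conf -- accept DRBD devices, reject the md device behind drbd1,
# then fall through to accept everything else (paths are assumptions, check yours)
filter = [ "a|^/dev/drbd.*|", "r|^/dev/md1$|", "a|.*|" ]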
Remind me, do you have backups?
You could try this at your own risk, or wait for QNAP.
This tries to bring all LVM parts online:
"vgchange -ay vg1"
Otherwise I'm kinda out of ideas...
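Still, if you do try it, a rough read-only sequence would look like this (the LV name is a guess, take the real one from lvscan; the mount point is arbitrary):
Code:
vgchange -ay vg1                      # activate every LV in vg1
lvscan                                # LVs should now show as ACTIVE
mkdir -p /mnt/check
mount -o ro /dev/vg1/lv1 /mnt/check   # read-only mount; lv1 is a guess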
by cosmin0608 » Thu Sep 13, 2018 4:19 pm
I will wait for QNAP a few more hours. If nothing happens, I will try to bring the LVM online. I have a backup, but not for all files.
by storageman » Thu Sep 13, 2018 4:20 pm
Make sure you ask them why this happened!
Let us know the fix, we're all ears!
by cosmin0608 » Thu Sep 13, 2018 4:26 pm
Sure. If they answer....ever.
by cosmin0608 » Thu Sep 13, 2018 6:05 pm
The state of md13 is clean, degraded.
Could the problem be coming from here?
Code:
[~] # mdadm -D /dev/md13
/dev/md13:
Version : 1.0
Creation Time : Thu Nov 3 20:18:40 2016
Raid Level : raid1
Array Size : 458880 (448.20 MiB 469.89 MB)
Used Dev Size : 458880 (448.20 MiB 469.89 MB)
Raid Devices : 24
Total Devices : 11
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Sep 13 11:31:15 2018
State : clean, degraded
Active Devices : 11
Working Devices : 11
Failed Devices : 0
Spare Devices : 0
Name : 13
UUID : 8a3fada4:605b5aa5:8f64465e:1be91c61
Events : 514618
Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4
4 0 0 4 removed
3 8 100 3 active sync /dev/sdg4
4 8 116 4 active sync /dev/sdh4
5 8 132 5 active sync /dev/sdi4
6 8 148 6 active sync /dev/sdj4
7 8 164 7 active sync /dev/sdk4
24 8 36 8 active sync /dev/sdc4
25 8 52 9 active sync /dev/sdd4
26 8 68 10 active sync /dev/sde4
27 8 84 11 active sync /dev/sdf4
24 0 0 24 removed
26 0 0 26 removed
28 0 0 28 removed
30 0 0 30 removed
32 0 0 32 removed
34 0 0 34 removed
36 0 0 36 removed
38 0 0 38 removed
40 0 0 40 removed
42 0 0 42 removed
44 0 0 44 removed
46 0 0 46 removed
by storageman » Thu Sep 13, 2018 6:34 pm
Going back over your pics,
Where is disk 3?
Is it definitely there?
Weird how md1 is online.
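As far as I know, md13 is just QNAP's small system-partition mirror spread across every bay, so "clean, degraded" there mostly reflects the empty slots; md1 is the array that actually holds your volume. A quick way to see what every array thinks of its members (standard mdadm/procfs, nothing QNAP-specific):
Code:
cat /proc/mdstat                  # every md array with its [UU_U...] member map
mdadm -D /dev/md1 | grep -E 'Raid Devices|Active Devices|State'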
Can you try,
"echo repair > /sys/block/md1/md/sync_action"
by cosmin0608 » Thu Sep 13, 2018 6:41 pm
Disk 3 is not there. I removed it from the QNAP.
I already tried the echo repair, and after 14 hours of syncing there was no result.
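For reference, the repair can be watched through the standard md sysfs files (nothing QNAP-specific):
Code:
cat /sys/block/md1/md/sync_action     # shows "repair" while running, "idle" when done
cat /proc/mdstat                      # repair progress as a percentage
cat /sys/block/md1/md/mismatch_cnt    # non-zero means mismatches were found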
by storageman » Thu Sep 13, 2018 6:44 pm
This is starting to not make sense; you started by saying there were 12 disks.
How many disks were in the original RAID 5, 11 or 12?
by cosmin0608 » Thu Sep 13, 2018 6:50 pm
The original had 12 disks. One of them had a SMART problem and I removed it (the volume had gone read-only, so I pulled that disk). I rebuilt the RAID and now it is 11 disks.
Really sorry for my English.
So the read-only problem appeared when a SMART error occurred on one disk.
by storageman » Thu Sep 13, 2018 6:56 pm
It's impossible to go from a 12-disk RAID 5 to an 11-disk RAID 5 and not be degraded.
So why aren't you replacing disk 3?
Something is really screwy here.
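The superblock on any member would settle it, since it records how many devices the array was created with (using partition 3 as the data partition is an assumption for this model, adjust to your layout):
Code:
mdadm -E /dev/sda3 | grep -E 'Raid Devices|Device Role|Array State'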
by cosmin0608 » Thu Sep 13, 2018 7:00 pm
Because I ordered one and I am waiting for it to arrive.