How to: Identify the drive order in an XFS NAS

Generally, NAS storage devices such as Buffalo Terastation, Iomega StorCenter, Synology and similar units are based on software RAID configurations built on the data partitions – the largest partitions – of each drive. These NASes employ the XFS file system distributed across the data partitions.

To successfully assemble the RAID configuration for further data recovery, you need to know the correct order of the RAID disks in your NAS.

The article below explains how to identify the order of drives in a four-disk XFS-based NAS from Buffalo Terastation, Iomega StorCenter, Synology and similar NAS models.

Ways and means

Before you start data recovery from your XFS NAS and, if necessary, reconstruct the embedded RAID, you need to know the RAID parameters and the order of the RAID drives.

The most reliable way to identify the drive order is to analyze the content of the RAID drives using known data fragments at the start of each data partition. CI Hex Viewer offers effective tools and techniques for such content analysis. At the same time, some powerful data recovery software offers an easier way to identify RAID parameters – automatic RAID detection.

Since NAS storage devices do not provide direct logical access to their file systems, and XFS NASes are no exception, you should begin by disassembling the storage and connecting its drives to a recovery PC. Please read HOW TO: Connect IDE/SATA drive to a recovery PC for instructions.




Automatic RAID detection

XFS-based NASes usually employ MD (Multiple Devices) software RAID. Such configurations are created with the well-known 'mdadm' utility and can describe linear (JBOD), multipath, RAID 0 (stripe), RAID 1 (mirror), RAID 5 and RAID 6 layouts. The utility writes metadata to the member partitions that is sufficient to build the RAID automatically.
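
For illustration only, here is a minimal sketch (in Python, not part of any UFS Explorer product) of how such MD metadata can be spotted on a member partition. It assumes you work on a raw image of a data partition; the file name is a hypothetical example, and only the common superblock locations for metadata versions 0.90, 1.1 and 1.2 are probed for the MD magic number 0xa92b4efc.

    # md_metadata_probe.py -- a minimal, hypothetical sketch; not an official tool.
    # Probes the usual locations of the Linux MD RAID superblock and checks
    # for the MD magic number 0xa92b4efc.
    import os
    import struct

    MD_SB_MAGIC = 0xa92b4efc

    def find_md_metadata(image_path):
        size = os.path.getsize(image_path)
        # v1.1 metadata sits at offset 0, v1.2 at 4 KiB from the start;
        # the older 0.90 format sits 64 KiB-aligned near the end of the member.
        candidates = [0, 4096, (size & ~0xFFFF) - 0x10000]
        with open(image_path, 'rb') as f:
            for offset in candidates:
                if offset < 0 or offset + 4 > size:
                    continue
                f.seek(offset)
                raw = f.read(4)
                if struct.unpack('<I', raw)[0] == MD_SB_MAGIC:
                    return offset
        return None

    if __name__ == '__main__':
        offset = find_md_metadata('nas_disk1_data_partition.img')   # hypothetical image
        if offset is None:
            print('No MD metadata found at the probed offsets')
        else:
            print(f'MD metadata found at offset 0x{offset:X}')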

SysDev Laboratories recommends its UFS Explorer software as a powerful set of utilities that supports automatic detection, reconstruction and data recovery from software RAID configurations. UFS Explorer RAID Recovery was specially designed to work with complex RAID systems, while UFS Explorer Professional Recovery offers a professional approach to the data recovery process; both have embedded tools for RAID recovery. Other UFS Explorer products work with RAID systems via plug-in modules. For more detailed information, please go to http://www.ufsexplorer.com/products.php.

For your NAS we recommend UFS Explorer RAID Recovery, software specially designed to work with RAID configurations.

To build the RAID automatically with UFS Explorer RAID Recovery, you should:

  • Run the software;

  • Make sure that all NAS drives (or drive image files) are opened;

  • Select ANY data partition of the software RAID to add it to the virtual RAID;

  • Once the partition is added and the MD metadata is detected, the software will ask whether you want to try to assemble the RAID automatically;

  • Press 'Yes' to build the RAID automatically: the software will load the disk partitions in the correct order and with the correct RAID parameters;

  • Press 'Build' to add the RAID to UFS Explorer for further operations.

Note: If the RAID parameters of your NAS were reset to a different RAID level, drive order or stripe size, the previous RAID configuration requires manual detection. Press 'No' in the software dialog to refuse automatic RAID assembly and specify the RAID parameters manually.




Analyzing disk content

The most reliable way to detect the RAID parameters and identify the order of the RAID drives is an in-depth analysis of the disk contents. The CI Hex Viewer software provides effective means for thorough low-level data analysis and is distributed free of charge.

To prepare for content analysis you should carry out the following actions:

  1. Connect the drives to a recovery PC;
    Linux users: do not mount file systems from NAS drives!
    Mac users: avoid any diagnose, repair or similar operations on the disks using disk utilities!
  2. Boot the PC, install and run the CI Hex Viewer software;
    Windows XP and earlier: run the software under an Administrator user account;
    Windows Vista/7 with UAC: run the software as Administrator from the context menu;
    Mac OS: authenticate yourself as the system Administrator when the program starts;
    Linux: from the command line run 'sudo cihexview' or 'su root -c cihexview'.
  3. Click 'Open Disk Storage' (Ctrl+Shift+'O') and open the data partition of each NAS drive.

Each NAS drive has the same partition structure: one to three small 'system' partitions (with a total size of a few gigabytes) and a large data partition (usually over 95% of the total drive capacity). More information about the partition layout is available here.
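
For a quick overview on a Linux recovery PC, the following minimal sketch (a hypothetical helper, not part of CI Hex Viewer) reads /proc/partitions and prints all detected partitions sorted by size, which makes the large data partition of each NAS drive easy to spot.

    # list_partitions.py -- a minimal sketch for a Linux recovery PC.
    # Prints every partition from /proc/partitions sorted by size so that
    # the large data partition of each NAS drive stands out.
    def read_partitions(path='/proc/partitions'):
        entries = []
        with open(path) as f:
            for line in f.readlines()[2:]:                 # skip the two header lines
                fields = line.split()
                if len(fields) == 4:
                    _major, _minor, blocks, name = fields
                    entries.append((name, int(blocks) * 1024))   # sizes are in 1 KiB blocks
        return entries

    if __name__ == '__main__':
        for name, size in sorted(read_partitions(), key=lambda e: e[1], reverse=True):
            print(f'{name:12s} {size / 2**30:10.1f} GiB')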




RAID configuration and advanced detection of drives order

To start the disk content analysis, open a hexadecimal view of the data partition of each NAS drive in CI Hex Viewer.
Below you will find an example of content analysis for a default RAID 5 configuration with a 64KB stripe size and the XFS file system.




Fig. 1. XFS file system start (superblock).


The start block (superblock) of an XFS file system contains the 'XFSB' string at the beginning, the values of the file system parameters and many zero bytes. A valid superblock never contains any non-zero data in the range 0x100..0x200. This property makes it easy to check superblock validity.
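
The same check is easy to script. Below is a minimal sketch (the image file name is a hypothetical example) that treats a block as a superblock candidate only if it starts with 'XFSB' and the byte range 0x100..0x200 contains nothing but zeros.

    # xfs_superblock_check.py -- a minimal sketch of the check described above.
    def looks_like_xfs_superblock(block):
        # 'XFSB' magic at the very start and only zero bytes at 0x100..0x200
        return block[:4] == b'XFSB' and not any(block[0x100:0x200])

    if __name__ == '__main__':
        with open('nas_disk1_data_partition.img', 'rb') as f:   # hypothetical image
            first_block = f.read(0x200)
        if looks_like_xfs_superblock(first_block):
            print('Start block looks like a valid XFS superblock')
        else:
            print('Start block is not a valid superblock (data or parity)')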




Fig. 2. XFS I-nodes block.


In this XFS file system the I-nodes block lies at offset 64KB. In RAID 0 and RAID 5 layouts with the default 64KB stripe size, the I-nodes block is located at zero offset of the data partition of Drive 2.
I-nodes can be identified by the 'IN' string (the '49 4E' byte sequence) at the start of each 256-byte (0x100) block. Each I-node describes a file system object.

The upper digit of the third byte defines the object type: a 4X byte indicates a directory and an 8X byte indicates a file.
In Figure 2 the first I-node describes a directory and the second one a file.
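
As a minimal sketch of this I-node check (the image file name is a hypothetical example), the snippet below reads the first 64KB of a data partition and reports, for every 256-byte record that starts with 'IN', whether the upper digit of its third byte marks a directory (4) or a file (8).

    # xfs_inode_scan.py -- a minimal sketch of the I-node check described above.
    def scan_inodes(block, record_size=0x100):
        for offset in range(0, len(block), record_size):
            record = block[offset:offset + record_size]
            if record[:2] != b'IN':            # '49 4E' byte sequence
                continue
            kind = record[2] >> 4              # upper digit of the third byte
            if kind == 0x4:
                yield offset, 'directory'
            elif kind == 0x8:
                yield offset, 'file'
            else:
                yield offset, 'other object'

    if __name__ == '__main__':
        with open('nas_disk2_data_partition.img', 'rb') as f:   # hypothetical image
            block = f.read(64 * 1024)
        for offset, kind in scan_inodes(block):
            print(f'I-node at offset 0x{offset:05X}: {kind}')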




Fig. 3. RAID 5 parity block.


A parity block contains a bitwise mixture of the data from the data blocks of the other drives. It may look like 'garbage' with visible fragments of data from those data blocks.

Even if the parity block contains a valid 'XFSB' string, like the superblock does, it also contains non-zero data in the 0x100..0x200 byte range, which distinguishes it from the superblock. Please also note that the parity block usually contains many more non-zero bytes.

Now, using this known content and assuming that the 'start block' is the first block of the data partition of the given drive, you can determine the RAID configuration:


RAID 5:
  • Only one start block will contain the superblock (Fig. 1);

  • If the stripe size is 64KB (usual for Terastation), one of the start blocks will contain I-nodes; the first I-node indicates a directory (the root directory). If the root directory contains only a few files, their names are given in the I-node body (as in Fig. 2);

  • The start block of the third drive will contain data or an I-nodes table;

  • The start block of the fourth drive will contain parity (Fig. 3);

  • If you apply the XOR operation to the bytes from the start blocks of all disks at the same byte position, you will always get a zero result.

One can define a RAID 5 configuration as a RAID with only one superblock among the start blocks and with parity present: the XOR operation over the bytes of all start blocks at the same byte position gives a zero result.

The drive order is as follows: the drive with the superblock is the first; the drive with the root directory is the second; the drive with parity is the fourth; the remaining drive is the third. How to check parity:

  1. Choose a partition offset with non-zero data;

  2. Run a calculator (e.g. the standard Windows calculator);

  3. Set 'View' to 'Scientific' or 'Programmer' and switch the mode from 'Dec' to 'Hex';

  4. Type the hexadecimal byte value from the first drive and press the 'Xor' button;

  5. Type the hexadecimal byte value from the next drive at exactly the same offset and press 'Xor' again;

  6. Repeat until the last drive. Before you enter the value from the last drive, the calculator must show the same number as at the specified position on the last disk, so the final 'Xor' operation gives zero.

A non-zero value at any of the offsets indicates either a calculation error or the absence of parity.
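
The same check can be automated. The sketch below (image file names and block length are hypothetical assumptions) XORs the bytes of the start blocks of all four data partitions position by position and reports the first offset where the result is not zero.

    # raid5_parity_check.py -- a minimal sketch of the XOR parity check above.
    from functools import reduce

    BLOCK_SIZE = 0x200                         # length of the start-block sample to compare

    def read_start_block(path, size=BLOCK_SIZE):
        with open(path, 'rb') as f:
            return f.read(size)

    def parity_holds(image_paths):
        blocks = [read_start_block(p) for p in image_paths]
        length = min(len(b) for b in blocks)
        for position in range(length):
            if reduce(lambda acc, blk: acc ^ blk[position], blocks, 0) != 0:
                return False, position
        return True, None

    if __name__ == '__main__':
        # Hypothetical image files of the four data partitions.
        images = ['disk1_data.img', 'disk2_data.img', 'disk3_data.img', 'disk4_data.img']
        ok, position = parity_holds(images)
        if ok:
            print('XOR over the start blocks is zero everywhere: parity is present')
        else:
            print(f'Non-zero XOR at offset 0x{position:X}: calculation error or no parity')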


RAID 0:
  • Only one start block will contain the superblock (Fig. 1);

  • If the stripe size is 64KB (usual for Terastation), one of the start blocks will contain I-nodes; the first I-node must indicate a directory (the root directory). If the root directory contains only a few files, their names are given in the I-node body (as in Fig. 2);

  • No other start block will contain another superblock or parity;

  • The other drives may contain more I-nodes in their start blocks.

One can define a RAID 0 configuration as a RAID with only one superblock among the start blocks and without parity.

The drive order is as follows: the drive with the superblock is the first; the drive with the root directory is the second. The order of the 3rd and the 4th drives cannot always be identified at once, but you can try both variants and find which of them is the right one.



RAID 10/0+1:
  • The start blocks of two drives will contain a valid superblock (Fig. 1);

  • The other two drives will contain data in the start block and, for a 64KB stripe size, I-nodes.

One can define a RAID 10/0+1 configuration as a RAID with two superblocks among the start blocks.

The drive order is as follows: a drive with a superblock is the first and a drive without a superblock (data or I-nodes) is the second. This configuration has two such pairs, and either of them can be used for data recovery.



RAID 1 and multi-part storage:
  • The start block of each drive will contain a valid superblock (Fig. 1).

One can define RAID 1 and multi-part storage as a RAID with superblocks in all start blocks.

The drive order is as follows: any drive from a RAID 1 gives all the data; for multi-part storage, each drive has a separate valid file system.

If the content analysis gives a contradictory result and you are still unsure about the drive order, try all combinations and choose the one that works.
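
If you decide to try the combinations systematically, keep in mind that four drives give 4! = 24 possible orders, and fixing the drive with the superblock as the first member cuts this down to 6. A minimal sketch (drive labels are placeholders) that lists the remaining candidates:

    # drive_order_candidates.py -- a minimal sketch; drive labels are placeholders.
    from itertools import permutations

    drives = ['disk1', 'disk2', 'disk3', 'disk4']
    superblock_drive = 'disk1'                 # the drive whose start block holds 'XFSB'

    candidates = [(superblock_drive,) + rest
                  for rest in permutations(d for d in drives if d != superblock_drive)]

    for order in candidates:
        print(' -> '.join(order))
    print(f'{len(candidates)} candidate orders to try')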


Note: UFS Explorer software does not modify the data on the storage, so you can try different RAID combinations until you get the appropriate one.





Final notes

In case of any physical damage, it is strongly recommended to bring your NAS to a specialized data recovery laboratory in order to avoid data loss.

If you are unsure whether you can conduct data recovery from your NAS by yourself, or are not confident about the RAID configuration of your NAS, feel free to use the professional services provided by SysDev Laboratories.

For data recovery professionals, SysDev Laboratories offers expert NAS storage analysis on a commercial basis.




Last update: 18.04.2012